Thanks to one of the posts suggested by Wayne, I used Martin Añazco's tip, setting auto_adjust to False...works fine.
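For anyone landing here later, a minimal sketch of that tip, assuming it refers to yfinance's download (ticker and date are illustrative):

import yfinance as yf

# auto_adjust=False keeps the raw Close prices and returns Adj Close as a separate column
data = yf.download('AAPL', start='2024-01-01', auto_adjust=False)
print(data.columns)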
You can use the below link to test it out.
Does anyone know the solution to this issue? Even using a proxy is not working.
DirList = {a([a.isdir]&~startsWith( {a.name},".")).name}
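For context, a here is presumably the struct array returned by dir; a minimal usage sketch (the folder path is hypothetical):

a = dir('C:\myFolder');
DirList = {a([a.isdir] & ~startsWith({a.name}, ".")).name}   % subfolder names, excluding "." and ".." and other dot-folders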
After trying so many things... it worked with your solution. If you did anything else afterwards, the info would be appreciated. Thanks!!
It turns out that this was a bug. I submitted it on casbin's GitHub and it was fixed in January 2025; see https://github.com/casbin/pycasbin/issues/358#event-15896058893
That is great; thank you for sharing it.
The file needed to be provided as stated below example 3-7
YouTube Data API - Can't access /members endpoint despite being in YouTube Partner Program (403 error)
I'm facing the same issue! Our stakeholder is part of the YouTube Partner Program, which, according to the Google support team, is required to access the /members endpoint if you don't have a direct Google or YouTube representative.
Here's what I have in place:
My app is verified in the Google Cloud Console with the following confidential YouTube API scopes:
https://www.googleapis.com/auth/youtube.readonly
https://www.googleapis.com/auth/youtube.channel-memberships.creator
The account requesting access to the /members endpoint is part of the YouTube Partner Program.
I'm trying to call this endpoint:
GET https://www.googleapis.com/youtube/v3/members
Using the channel account (the one in the Partner Program) via the application that has the required scopes.
Here’s the response I get:
{
"message": "Request failed with status code 403",
"name": "AxiosError",
"config": {
"headers": {
"Accept": "application/json, text/plain, */*",
"Authorization": "Bearer _token",
"User-Agent": "axios/1.7.2",
"Accept-Encoding": "gzip, compress, deflate, br"
},
"params": {
"part": "snippet",
"maxResults": 50
},
"method": "get",
"url": "https://www.googleapis.com/youtube/v3/members"
},
"code": "ERR_BAD_REQUEST",
"status": 403
}
I've searched extensively and found many developers encountering the same issue, but no confirmed solutions.
Digging deeper, I found a Stack Overflow thread that discusses integrating YouTube memberships with a Discord server. They seem to have found a workaround by using the Discord API to manually check if a user who is a YouTube member also exists in a Discord server. However, this feels like a problem transfer, not a proper solution.
Has anyone successfully accessed the /members endpoint without a direct Google/YouTube rep, using only YouTube Partner Program eligibility?
Is there something specific or hidden required beyond scopes and being in the Partner Program?
Are there additional permissions or manual approvals that need to be done in the Google Cloud Console or YouTube Studio?
Is there any solution to this issue? I faced it and tried with * and without it, but I still get access denied.
Resolved by following this answer:
github.com/Yelp/elastalert/issues/1927#issuecomment-1054215307
Follow the LangGraph instructions to install it on Windows; works like a charm.
https://github.com/pygraphviz/pygraphviz/blob/main/INSTALL.txt
@CodeChops, can you please explain how you configured Keycloak in the server-side and client-side projects?
@Pravallika KV: Answered here so I can add screenshots; otherwise, I would have made it a comment :)
I can't get that to work; I tried setting FUNCTIONS_EXTENSION_VERSION; am I missing something?
I created a brand new app, did not load any code, and got version 4.1037.1.1.
Then I went and changed FUNCTIONS_EXTENSION_VERSION, and now it refuses to come up again.
May I know if you found a solution to this? I'm facing it right now...
This problem seems to be in Android Studio itself, because I have already done all the steps mentioned above, and others, and it still doesn't work; the same error persists.
I’m actually facing the same issue — in my case, the Snackbar message never appears on the screen at all. I reached out to the BrowserStack team regarding this, but unfortunately, I haven’t received any concrete or helpful feedback so far. :(
I have the same problem, do you have a possible solution for this?
PLEASE STOP SPAMMING/BEING ANNOYING
I'm currently experiencing the same issue.
Did you manage to find the answer?
Not an answer, but I am curious if you were able to find a good solution; I would be very interested to know, thank you!
I have the same error now; did you find the fix?
Did you manage to make it work? I'm stuck with the same issue.
Thanks to @bbhtt over on Flatpak Matrix. He said I should use org.gnome.Platform and org.gnome.Sdk rather than the freedesktop runtimes because they already have Gtk installed.
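For anyone else hitting this, a minimal sketch of the relevant Flatpak manifest fields (the app id and runtime version are illustrative):

app-id: org.example.MyGtkApp
runtime: org.gnome.Platform
runtime-version: "47"
sdk: org.gnome.Sdk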
const initClient = async () => {
try {
const res = await fetch('/api/get-credentials', {
method: 'GET',
headers: { 'Content-Type': 'application/json' },
});
if (!res.ok) throw new Error(`Failed to fetch credentials: ${res.status}`);
const { clientId } = await res.json();
if (!clientId) {
addLog('Client ID not configured on the server');
return null;
}
const client = window.google.accounts.oauth2.initTokenClient({
client_id: clientId,
scope: 'https://www.googleapis.com/auth/drive.file https://www.googleapis.com/auth/userinfo.email',
callback: async (tokenResponse) => {
if (tokenResponse.access_token) {
setAccessToken(tokenResponse.access_token);
localStorage.setItem('access_token', tokenResponse.access_token);
const userInfo = await fetch('https://www.googleapis.com/oauth2/v3/userinfo', {
headers: { 'Authorization': `Bearer ${tokenResponse.access_token}` },
});
const userData = await userInfo.json();
setUserEmail(userData.email);
const userRes = await fetch('/api/user', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email: userData.email }),
});
const userDataResponse = await userRes.json();
addLog(userDataResponse.message);
try {
const countRes = await fetch('/api/get-pdf-count', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email: userData.email }),
});
const countData = await countRes.json();
setPdfCount(countData.count || 0);
addLog(`Initial PDF count loaded: ${countData.count || 0}`);
} catch (error) {
addLog(`Failed to fetch initial PDF count: ${error.message}`);
}
markAuthenticated();
} else {
addLog('Authentication failed');
}
},
});
return client;
} catch (error) {
addLog(`Error initializing client: ${error.message}`);
return null;
}
};
This is a snippet of the code; I am trying to use the drive.file scope, but it's not working as I want. How do I fix this?
Thanks!
Refer to this screenshot from the Flutter deprecated API docs.
A working example and some more explanation is given here:
https://www.codeproject.com/Articles/1212332/64-bit-Structured-Exception-Handling-SEH-in-ASM
Is v4.01 supported now? I am getting the following error: 'The version '4.01' is not valid. [HTTP/1.1 400 Bad Request]'
Have you found it? I also need help.
I've tried the same two methods without any luck. Service accounts can use the API but can't be added as collaborators on notes (I would prefer this feature!).
With an OAuth setup, Google Keep scopes can't be added.
I feel like this is a dead end?!
Anyone got updates on this?
Here is the new location of Google’s Closure Compiler https://jscompressor.treblereel.dev/
I have tried the below script and am able to get missing files, but not missing folders. Could you please help us get the right script, which can list missing folders as well? (See the note after the script.)
# Prompt for the paths of the two folders to compare
$folder1 = "C:\Users\User\Desktop\Events"
$folder2 = "C:\Users\User\Desktop\Events1"
Write-Host "From VNX:"(Get-ChildItem -Recurse -Path $folder1).Count
Write-Host "From UNITY:"(Get-ChildItem -Recurse -Path $folder2).Count
# Get the files in each folder and store their relative and full paths
# in arrays, optionally without extensions.
$dir1Dirs, $dir2Dirs = $folder1, $folder2 |
ForEach-Object {
$fullRootPath = Convert-Path -LiteralPath $_
# Construct the array of custom objects for the folder tree at hand
# and *output it as a single object*, using the unary form of the
# array construction operator, ","
, @(
Get-ChildItem -File -Recurse -LiteralPath $fullRootPath |
ForEach-Object {
$relativePath = $_.FullName.Substring($fullRootPath.Length + 1)
if ($ignoreExtensions) { $relativePath = $relativePath -replace '\.[^.]*$' }
[PSCustomObject] @{
RelativePath = $relativePath
FullName = $_.FullName
}
}
)
}
# Compare the two arrays.
# Note the use of -Property RelativePath and -PassThru
# as well as the Where-Object SideIndicator -eq '=>' filter, which
# - as in your question - only reports differences
# from the -DifferenceObject collection.
# To report differences from *either* collection, simply remove the filter.
$diff =
Compare-Object -Property RelativePath -PassThru $dir1Dirs $dir2Dirs |
Where-Object SideIndicator -eq '=>'
# Output the results.
if ($diff) {
Write-Host "Files that are different:"
$diff | Select-Object -ExpandProperty FullName
} else {
Write-Host "No differences found."
}
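For reference, Get-ChildItem -File (used above) enumerates only files, which is why missing folders never show up in the comparison. A minimal sketch of the relevant change, assuming the intent is to compare directories as well (drop -File so directories are returned too):

# Drop -File so directories are compared alongside files
, @(
    Get-ChildItem -Recurse -LiteralPath $fullRootPath |
        ForEach-Object {
            [PSCustomObject] @{
                RelativePath = $_.FullName.Substring($fullRootPath.Length + 1)
                FullName     = $_.FullName
            }
        }
)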
I do have the same issue; there is really nothing out there :/.
There is a test release of v2: https://www.npmjs.com/package/@jadkins89/next-cache-handler. But it's WIP code, so I am not sure I would use it for a real project.
It is kind of concerning that there are not more alternatives :/. Is everybody else sticking to 14, or not using an external cache?
Click the image and check the highlighted section... you can figure it out.
We have the same issue when we try to update our old logging lib. The link is no longer available; do you remember what solved your issue?
Bro, can you provide the structure of the database?
Have you already found a solution for this problem?
I have the same problem, and I can only get it to work when I run the code on a compute resource with the access mode set to "no isolation shared".
It does not work when the navigation controller has more than one child view controller and then pushes a UIHostingController, right?
Did you ever manage to get this to work? Whilst mine seems OK, the output of the tests is not being generated into the CSV file. I've a load of tests created, but no way to actually view the outcome.
Thanks
Matt
Here is sample code that implements PiP with the Agora RTC SDK.
I am also trying to get this kind of API, but I am not getting it.
That's why I started using Selenium web scraping and getting the result.
But now Google has detected the bot and I am unable to get the result. Could you please share the approach you are using?
At first, I suspected it might be related to an access token issue, so I updated the configuration to use a global token. However, that did not resolve the problem. Can anyone help me with this or suggest a workaround?
I have downloaded the .exe, but when I extract it I don't see NQjc.jar in the NetSuite JDBC Drivers folder; we have other .exe, .dll, .cer, and .txt files. Please help me with the proper installation to get NQjc.jar.
I found out why:
there is an attribute called count in my model.
Maybe it calls the count attribute rather than the count in SQL.
This was a stupid question, lol.
Thanks, everyone.
Made simple adjustments: the input can be a negative value, and the user has to input the value accordingly (increment or decrement). Not the best or most scalable approach, but it does the work. Can you guys please suggest a better one?
@paulo, can you give me examples? I also face the same issue with deleting images: sometimes, using retention rules, an image is already deleted but is still needed in k8s when we need to roll back the app to an earlier version whose image was already deleted by the retention rules. Basically, I only want to delete every image that is not in the k8s list: get all images in k8s, and if an image does not exist there, delete it.
I am looking for a solution to a similar problem... I have an Excel sheet with 100 rows, each containing a unique word, and I have a PDF file which contains thousands of sentences including those words. Is there any way I can just upload the Excel file, have the PDF reader take one word at a time and search for it through the PDF, and, once all the words have been searched for, return a PDF with all those words highlighted?
You can refer to this link: https://support.huaweicloud.com/intl/zh-cn/basics-terraform/terraform_0021.html. It teaches you how to configure the backend. You can Google-translate it to English.
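For reference, the generic shape of a Terraform backend block looks like this (the backend type and values are placeholders; the linked doc covers the Huawei Cloud specifics):

terraform {
  backend "s3" {
    bucket = "my-tf-state"
    key    = "terraform.tfstate"
    region = "region-id"
  }
}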
You will need a converter to JWT; please check https://medium.com/@wirelesser/oauth2-write-a-resource-server-with-keycloak-and-spring-security-c447bbca363c
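A minimal sketch of such a converter, assuming Spring Security's OAuth2 resource server and Keycloak's default realm_access.roles claim (class and bean names are illustrative):

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;

@Configuration
class JwtConverterConfig {
    @Bean
    JwtAuthenticationConverter jwtAuthenticationConverter() {
        JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
        converter.setJwtGrantedAuthoritiesConverter(jwt -> {
            // Keycloak puts realm roles under realm_access.roles
            Map<String, Object> realmAccess = jwt.getClaimAsMap("realm_access");
            if (realmAccess == null) return List.of();
            @SuppressWarnings("unchecked")
            Collection<String> roles = (Collection<String>) realmAccess.get("roles");
            return roles.stream()
                    .map(role -> (GrantedAuthority) new SimpleGrantedAuthority("ROLE_" + role))
                    .collect(Collectors.toList());
        });
        return converter;
    }
}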
You can use the @dynamic decorator: https://www.union.ai/docs/flyte/user-guide/core-concepts/workflows/dynamic-workflows/
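A minimal sketch of a dynamic workflow, assuming flytekit (task and workflow names are illustrative):

from typing import List

from flytekit import dynamic, task, workflow

@task
def square(x: int) -> int:
    return x * x

@dynamic
def fan_out(n: int) -> List[int]:
    # the number of tasks is only known at run time, which is what @dynamic allows
    return [square(x=i) for i in range(n)]

@workflow
def wf(n: int = 3) -> List[int]:
    return fan_out(n=n)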
I finally found a way to make it happen, so I'm posting my solution here in case someone is facing the same problem as me.
Because we need to send some special headers to the Azure service when creating the WebSocket connection, we need a proxy server (the native WebSocket in the browser cannot send custom headers).
server.ts:
import http from "http";
import * as WebSocket from "ws";
import crypto from "crypto";
import fs from "fs";
import path from "path";
// Azure tts
const URL =
"wss://<your_azure_service_origin>.tts.speech.microsoft.com/cognitiveservices/websocket/v2";
const KEY = "your_azure_service_key";
const server = http.createServer((req, res) => {
res.end("Server is Running");
});
server.on("upgrade", (req, socket, head) => {
const remote = new WebSocket.WebSocket(URL, {
headers: {
"ocp-apim-subscription-key": KEY,
"x-connectionid": crypto.randomUUID().replace(/-/g, ""),
},
});
remote.on("open", () => {
console.log("remote open");
const requestId = crypto.randomUUID().replace(/-/g, "");
const now = new Date().toISOString();
// send speech.config
remote.send(
[
`X-Timestamp:${now}`,
"Path:speech.config",
"",
`${JSON.stringify({})}`,
].join("\r\n"),
);
// send synthesis.context
remote.send(
[
`X-Timestamp:${now}`,
"Path:synthesis.context",
`X-RequestId:${requestId}`,
"",
`${JSON.stringify({
synthesis: {
audio: {
// outputFormat: "audio-16khz-32kbitrate-mono-mp3",
outputFormat: "raw-16khz-16bit-mono-pcm",
metadataOptions: {
visemeEnabled: false,
bookmarkEnabled: false,
wordBoundaryEnabled: false,
punctuationBoundaryEnabled: false,
sentenceBoundaryEnabled: false,
sessionEndEnabled: true,
},
},
language: { autoDetection: false },
input: {
bidirectionalStreamingMode: true,
voiceName: "zh-CN-YunxiNeural",
language: "",
},
},
})}`,
].join("\r\n"),
);
const client = new WebSocket.WebSocketServer({ noServer: true });
client.handleUpgrade(req, socket, head, (clientWs) => {
clientWs.on("message", (data: Buffer) => {
const json = JSON.parse(data.toString("utf8")) as {
type: "data" | "end";
data?: string;
};
console.log("Client:", json);
remote.send(
[
`X-Timestamp:${new Date().toISOString()}`,
`Path:text.${json.type === "data" ? "piece" : "end"}`,
"Content-Type:text/plain",
`X-RequestId:${requestId}`,
"", // empty line
json.data || "",
].join("\r\n"),
);
});
const file = createWAVFile(`speech/${Date.now()}.wav`);
remote.on("message", (data: Buffer, isBinary) => {
// console.log("Remote, isBinary:", isBinary);
const { headers, content } = parseChunk(data);
console.log({ headers });
if (isBinary) {
if (headers.Path === "audio") {
// why do we need to skip the first byte?
const audioContent = content.subarray(1);
if (audioContent.length) {
file.write(audioContent);
clientWs.send(audioContent);
}
}
} else if (headers.Path === "turn.end") {
file.end();
}
});
clientWs.on("close", () => {
console.log("client close");
remote.close();
});
clientWs.on("error", (error) => {
console.log("client error", error);
});
});
remote.on("close", (code, reason) => {
console.log("remote close", reason.toString());
});
remote.on("error", (error) => {
console.log("remote error", error);
});
});
});
function parseChunk(buffer: Buffer) {
const len = buffer.length;
const headers: string[][] = [];
// skip the first two bytes (likely a header-length prefix in the Azure speech websocket framing)
let i = 2;
let temp: number[] = [];
let curr: string[] = [];
let contentPosition: number;
for (; i < len; i++) {
if (buffer[i] === 0x3a) {
// :
curr.push(Buffer.from(temp).toString());
temp = [];
} else if (buffer[i] === 0x0d && buffer[i + 1] === 0x0a) {
// \r\n
// maybe empty line
if (temp.length) {
curr.push(Buffer.from(temp).toString());
temp = [];
headers.push(curr);
curr = [];
}
i += 1; // skip \n
contentPosition = i;
if (headers.at(-1)?.[0] === "Path") {
// if we get `Path`
break;
}
} else {
temp.push(buffer[i]);
}
}
const obj: Record<string, string> = {};
for (const [key, value] of headers) {
obj[key] = value;
}
const content = buffer.subarray(contentPosition!);
return { headers: obj, content };
}
// for test
function createWAVFile(
filename: string,
sampleRate = 16000,
bitDepth = 16,
channels = 1,
) {
let dataLength = 0;
let data = Buffer.alloc(0);
return {
write(chunk: Buffer) {
dataLength += chunk.length;
data = Buffer.concat([data, chunk]);
},
end() {
const byteRate = sampleRate * (bitDepth / 8) * channels;
const blockAlign = (bitDepth / 8) * channels;
// WAV header
const buffer = Buffer.alloc(44);
buffer.write("RIFF", 0); // ChunkID
buffer.writeUInt32LE(36 + dataLength, 4); // ChunkSize
buffer.write("WAVE", 8); // Format
buffer.write("fmt ", 12); // Subchunk1ID
buffer.writeUInt32LE(16, 16); // Subchunk1Size (16 for PCM)
buffer.writeUInt16LE(1, 20); // AudioFormat (1 = PCM)
buffer.writeUInt16LE(channels, 22); // Channels
buffer.writeUInt32LE(sampleRate, 24); // SampleRate
buffer.writeUInt32LE(byteRate, 28); // ByteRate
buffer.writeUInt16LE(blockAlign, 32); // BlockAlign
buffer.writeUInt16LE(bitDepth, 34); // BitsPerSample
buffer.write("data", 36); // Subchunk2ID
buffer.writeUInt32LE(dataLength, 40); // Subchunk2Size
const stream = fs.createWriteStream(filename);
stream.write(buffer);
stream.write(data);
stream.end();
console.log(`write to file ${filename}`);
},
};
}
server.listen(8080);
player.ts:
type StreamingAudioPlayerOptions = {
autoPlay: boolean;
};
export class StreamingAudioPlayer {
private context = new AudioContext();
private chunks: Blob[] = [];
private decodeChunkIndex = 0;
private buffers: AudioBuffer[] = [];
private duration = 0;
private decoding = false;
private scheduleIndex = 0;
private currentDuration = 0; // roughly tracks the played duration; for display only, not for playback control
private state: "play" | "stop" = "stop";
private isPlaying = false; // whether playback is actually running
// tracks the scheduled start time of the next buffer
private nextScheduledTime = 0;
// tracks the audio sources that have been created
private activeSources: AudioBufferSourceNode[] = [];
private sourceSchedule = new WeakMap<AudioBufferSourceNode, [number]>();
private beginOffset = 0;
private timer: number | null;
constructor(private readonly options: StreamingAudioPlayerOptions) {}
private async decodeAudioChunks() {
if (this.decoding || this.chunks.length === 0) {
return;
}
this.decoding = true;
while (this.decodeChunkIndex < this.chunks.length) {
const originBuffer =
await this.chunks[this.decodeChunkIndex].arrayBuffer();
// Step 1: convert to Int16
const int16 = new Int16Array(originBuffer);
// Step 2: convert to Float32
const float32 = new Float32Array(int16.length);
for (let i = 0; i < int16.length; i++) {
float32[i] = int16[i] / 32768; // Normalize to [-1.0, 1.0]
}
// Step 3: create an AudioBuffer (mono)
const audioBuffer = this.context.createBuffer(
1, // mono
float32.length,
16000, // sampleRate
);
audioBuffer.copyToChannel(float32, 0);
this.buffers.push(audioBuffer);
this.duration += audioBuffer.duration;
console.log(
`chunk ${this.decodeChunkIndex} decoded, total buffer duration: ${this.duration}`,
);
this.decodeChunkIndex++;
if (this.state === "play" && !this.isPlaying) {
console.log("ready to play");
this._play();
} else if (this.state === "stop" && this.options.autoPlay) {
this.play();
}
}
this.decoding = false;
}
async append(chunk: Blob) {
this.chunks.push(chunk);
if (!this.decoding) {
this.decodeAudioChunks();
}
}
private scheduleBuffers() {
while (this.scheduleIndex < this.buffers.length) {
if (this.nextScheduledTime - this.context.currentTime > 10) {
// keep roughly 10s of audio buffered
break;
}
const buffer = this.buffers[this.scheduleIndex];
const source = this.context.createBufferSource();
source.buffer = buffer;
// record and advance the scheduled time
const startTime = this.nextScheduledTime;
this.nextScheduledTime += buffer.duration;
source.connect(this.context.destination);
if (this.beginOffset !== 0) {
source.start(startTime, this.beginOffset);
this.beginOffset = 0;
} else {
source.start(startTime);
}
this.sourceSchedule.set(source, [startTime]);
console.log(`schedule chunk ${this.scheduleIndex}`);
this.activeSources.push(source);
const index = this.scheduleIndex;
this.scheduleIndex++;
// listen for playback end to maintain state
source.addEventListener("ended", () => {
// remove the finished source
this.activeSources = this.activeSources.filter((s) => s !== source);
if (this.state !== "play") {
return;
}
console.log(`chunk ${index} play finish`);
if (this.scheduleIndex < this.buffers.length) {
// keep scheduling the chunks that haven't played yet
this.scheduleBuffers();
} else if (this.activeSources.length === 0) {
// if no sources remain, stop playback
this._stop();
}
});
}
}
private _play() {
// use a timer to roughly track the played duration
// ? what if playback stalls
const updatePlayDuration = (timestamp1: number) => {
return (timestamp2: number) => {
this.currentDuration += timestamp2 - timestamp1;
this.timer = requestAnimationFrame(updatePlayDuration(timestamp2));
};
};
this.timer = requestAnimationFrame(updatePlayDuration(performance.now()));
// initialize the playback time to the current context time
this.nextScheduledTime = this.context.currentTime;
this.isPlaying = true;
this.scheduleBuffers();
}
private _stop() {
if (this.state !== "play") {
return;
}
// stop all active audio sources
this.activeSources.forEach((source, index) => {
if (index === 0) {
// current playing source
const offset =
this.context.currentTime - this.sourceSchedule.get(source)![0];
console.log("offset:", offset);
}
source.stop();
});
cancelAnimationFrame(this.timer!);
this.timer = null;
this.activeSources = [];
// not sure whether all the audio chunks have been loaded
this.state = "stop";
this.isPlaying = false;
console.log(`played duration: ${this.currentDuration}`);
}
resume() {
// resuming should be based on the played duration,
// because the played duration can be adjusted via a timeline (not yet implemented)
this.scheduleIndex = 0;
let d = 0;
for (; this.scheduleIndex < this.buffers.length; this.scheduleIndex++) {
const buffer = this.buffers[this.scheduleIndex];
if (d + buffer.duration * 1000 > this.currentDuration) {
break;
}
d += buffer.duration * 1000;
}
this.state = "play";
this.beginOffset = (this.currentDuration - d) / 1000;
console.log("resume offset", this.beginOffset);
this._play();
}
play() {
if (this.state === "play") {
return;
}
this.state = "play";
this.duration = this.buffers.reduce((total, buffer) => {
return total + buffer.duration;
}, 0);
if (this.duration === 0) {
console.warn("waiting buffer");
return;
}
this.currentDuration = 0;
this.scheduleIndex = 0;
console.log(this);
this._play();
}
pause() {
this._stop();
}
}
index.js:
// something like:
const player = new StreamingAudioPlayer({ autoPlay: true });
const ws = new WebSocket("xxx");
// wait for the connection to open before sending
ws.addEventListener("open", () => {
  ws.send('{"type":"data","data":"hello"}');
  ws.send('{"type":"data","data":" world"}');
  ws.send('{"type":"end"}');
});
ws.addEventListener("message", (e) => {
  player.append(e.data);
});
The code is for reference only. If anyone has any better suggestions, please feel free to share your thoughts.
This is the workaround that worked for me! https://docs.hetrixtools.com/microsoft-teams-how-to-remove-name-used-a-workflow-template-to-send-this-card-get-template/
How do I add a column to indicate whether the mismatch appears in df1 or df2?
Instead of @PostConstruct, listen to the ApplicationReadyEvent.
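A minimal sketch, assuming Spring Boot (class and method names are illustrative):

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
class StartupTasks {
    // Runs once the application context is fully started, unlike @PostConstruct,
    // which runs during bean initialization.
    @EventListener(ApplicationReadyEvent.class)
    void onApplicationReady() {
        // do startup work here
    }
}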
This is built into PHPUnit now: https://github.com/sebastianbergmann/phpunit/pull/6118
Please help me.
Thank you so much. This fixed my problem!
Any instructions on how to do it? For example, how to extract the application from the card and reinstall it with a different AID value?
Have you found a solution yet? I've tried cleaning node modules and pods and reinstalling, but nothing works.
This link explains how to install Odoo + PostgreSQL on Alpine Linux.
I need help with this error: ExternalError: TypeError: Cannot read properties of undefined (reading 'tp$mro') on line 5 in dacoolthing.py
This is the code:
from turtle import *

class char(Turtle):
    def __init__(self):
        super().__init__()
        self.penup()
        self.shape("turtle")
        self.goto(0,0)
        self.speed(0)

    def attack():
        print()
I am having the same problem, was there any resolution?
I tried manually modifying the requirements.txt file to add libglib2.0, libnss3, libgconf, and libfontconfig1 as shown on this other thread, but it didn't seem to have any effect.
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 127
Also tried connecting with SSH to pip install selenium directly in hopes the chrome driver dependencies would get updated.
Any luck on this? I’m having the same issue.
Did you ever find the solution to this?
I'm in exactly the same spot right now. I had to change the port to 8082 as well, though my issue lies with Grafana: the metrics are not being output to Prometheus for some reason :(
There is a recipe hosted in the meta-python-ai layer https://layers.openembedded.org/layerindex/recipe/403973/
Can you share your code so we can replicate the issue on our end? This way it's impossible to understand what is causing it. Or is your website hosted online? If yes, then share the URL.
Thanks, your method helped me a lot to implement the bubble sort algorithm on a linked list data structure.
Great question and great answer! https://stackoverflow.com/users/12109788/jpsmith
I have added more variables in c(), and I can't get it to sum the medians etc. for each variable. Instead I get a list per hour from the first value. How can I fix this?
library(dplyr)
library(tidyr)   # for drop_na()
library(chron)
library(gtsummary)
chrontest <- chestdf %>%
select(tts_sec, ttl_sec, ttprov1_sec, deltatrop_sec, vistelse_sec) %>%
drop_na() %>%
mutate(across(ends_with("_sec"), ~ format(as.POSIXct(.), "%H:%M:%S"))) %>%
mutate(across(ends_with("_sec"), ~ chron::times(.)))
summary_table <- chrontest %>%
tbl_summary(
include = c("tts_sec", "ttl_sec", "ttprov1_sec", "deltatrop_sec", "vistelse_sec"),
label = list(
tts_sec ~ "Tid till S",
ttl_sec ~ "Tid till L",
ttprov1_sec ~ "Tid till provtagn 1",
deltatrop_sec ~ "Tid till provtagn 2",
vistelse_sec ~ "Vistelsetid"
),
type = list(
all_continuous() ~ "continuous2"
),
statistic = list(
all_continuous() ~ c(
"{mean}",
"{median} ({p25}, {p75})",
"{min}, {max}"
)
),
digits = list(
all_continuous() ~ 2
)
)
I'll be the first to admit that it may not work for everyone or in every use case, but it works for what I intended.
Since it's been a while since posting the question, naturally a good bit has changed in my implementation of Sanity, but you shouldn't have any issues adapting it to your own project with minor changes.
I'd like to start by addressing the changes I've made since posting the question. Please keep in mind all changes listed here were created with Next.js 15 and—more specifically—the next/image component in mind. You may need to make modifications if this does not apply to you.
I no longer use the imageUrlFor, compressWidthAndHeight, or prepareImage functions to generate the src attribute and other image props. Instead I take advantage of the GROQ query step, pulling in the information I need and creating the src at this level. I created a helper function for querying images with GROQ, since there are many different scenarios that require different functions on the src.
If you're using TypeScript like I do, here are the definitions you'll need:
export type SanityCrop = {
top: number
left: number
bottom: number
right: number
}
export type SanityHotspot = {
x: number
y: number
width: number
height: number
}
export type SanityImage = {
_id: string
alt?: string
aspectRatio?: number
blurDataURL: string
crop?: SanityCrop
height?: number
hotspot?: SanityHotspot
filename?: string
src: string
width?: number
}
All descriptions in the GroqImageSourceOptions type are copied from Sanity – Image transformations – Image URLs. You're welcome to use this in your own projects if you want.
type GroqImageSourceOptions = Partial<{
/** Automatically returns an image in the most optimized format supported by the browser as determined by its Accept header. To achieve the same result in a non-browser context, use the `fm` parameter instead to specify the desired format. */
auto: 'format'
/** Hexadecimal code (RGB, ARGB, RRGGBB, AARRGGBB) */
bg: string
/** `0`-`2000` */
blur: number
/** Use with `fit: 'crop'` to specify how cropping is performed.
*
* `focalpoint` will crop around the focal point specified using the `fp` parameter.
*
* `entropy` attempts to preserve the "most important" part of the image by selecting the crop that preserves the most complex part of the image.
* */
crop:
| 'top'
| 'bottom'
| 'left'
| 'right'
| 'top,left'
| 'top,right'
| 'bottom,left'
| 'bottom,right'
| 'center'
| 'focalpoint'
| 'entropy'
/** Configures the headers so that opening this link causes the browser to download the image rather than showing it. The browser will suggest to use the file name provided here. */
dl: string
/** Specifies device pixel ratio scaling factor. From `1` to `3`. */
dpr: 1 | 2 | 3
/** Affects how the image is handled when you specify target dimensions.
*
* `clip` resizes to fit within the bounds you specified without cropping or distorting the image.
*
* `crop` crops the image to fill the size you specified when you specify both `w` and `h`.
*
* `fill` operates the same as `clip`, but any free area not covered by your image is filled with the color specified in the `bg` parameter.
*
* `fillmax` places the image within the box you specify, never scaling the image up. If there is excess room in the image, it is filled with the color specified in the `bg` parameter.
*
* `max` fits the image within the box you specify, but never scaling the image up.
*
* `min` resizes and crops the image to match the aspect ratio of the requested width and height. Will not exceed the original width and height of the image.
*
* `scale` scales the image to fit the constraining dimensions exactly. The resulting image will fill the dimensions, and will not maintain the aspect ratio of the input image.
*/
fit: 'clip' | 'crop' | 'fill' | 'fillmax' | 'max' | 'min' | 'scale'
/** Flip image horizontally, vertically or both. */
flip: 'h' | 'v' | 'hv'
/** Convert image to jpg, pjpg, png, or webp. */
fm: 'jpg' | 'pjpg' | 'png' | 'webp'
/** Specify a center point to focus on when cropping the image. Values from 0.0 to 1.0 in fractions of the image dimensions. */
fp: {
x: number
y: number
}
/** The frame of an animated image. The only valid value is 1, which is the first frame. */
frame: 1
/** Height of the image in pixels. Scales the image to be that tall. */
h: number
/** Invert the colors of the image. */
invert: boolean
/** Maximum height. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
maxH: number
/** Maximum width in the context of image cropping. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
maxW: number
/** Minimum height. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
minH: number
/** Minimum width. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
minW: number
/** Rotate the image in 90 degree increments. */
or: 0 | 90 | 180 | 270
/** The number of pixels to pad the image. Applies to both width and height. */
pad: number
/** Quality `0`-`100`. Specify the compression quality (where applicable). Defaults are `75` for JPG and WebP. */
q: number
/** Crop the image according to the provided coordinate values. */
rect: {
left: number
top: number
width: number
height: number
}
/** Currently the asset pipeline only supports `sat: -100`, which renders the image with grayscale colors. Support for more levels of saturation is planned for later. */
sat: -100
/** Sharpen `0`-`100` */
sharp: number
/** Width of the image in pixels. Scales the image to be that wide. */
w: number
}>
function applySourceOptions(src: string, options: GroqImageSourceOptions) {
const convertedOptions = Object.entries(options)
.map(
([key, value]) =>
`${breakCamelCase(key).join('-').toLowerCase()}=${typeof value === 'string' || typeof value === 'boolean' ? value : typeof value === 'number' ? Math.round(value) : Object.values(value).join(',')}`,
)
.join('&')
return src + ` + "?${convertedOptions}"`
}
type GroqImageProps = Partial<{
alt: boolean
/** Returns the aspect ratio of the image */
aspectRatio: boolean
/** Precedes asset->url */
assetPath: string
blurDataURL: boolean
/** Returns the coordinates of the crop */
crop: boolean
/** Returns the height of the image */
height: boolean
/** Returns the hotspot of the image */
hotspot: boolean
filename: boolean
otherProps: string[]
src: GroqImageSourceOptions
/** Number of spaces to indent each generated line of the projection (read by groqImage) */
tabStart: number
/** Returns the width of the image */
width: boolean
}>
/**
* # GROQ Image
*
* **Generates the necessary information for extracting the image asset, with built-in and typed options, making it easier to use GROQ's API as it relates to image fetching.**
*
* - Include `alt` and `blurDataURL` whenever possible.
*
* - It's best to always specify the `src` options as well.
*
* - Include either `srcset` or `sources` for best results.
*
* - `srcset` generates URLs for the `srcset` attribute of an `<img>` element.
*
* - `sources` generates URLs for `<source>` elements, used in the `<picture>` element.
*/
export function groqImage(props?: GroqImageProps) {
const prefix = props?.tabStart ? `\n${' '.repeat(props.tabStart)}` : '\n ',
assetPath = props?.assetPath ? `${props.assetPath}.` : ''
let constructor = `{`
if (props?.otherProps) constructor = constructor + prefix + props.otherProps.join(`,${prefix}`) + `,`
if (props?.alt) constructor = constructor + prefix + `"alt": ${assetPath}asset->altText,`
if (props?.crop) {
let crop = 'crop,'
if (props.assetPath) crop = `"crop": ${assetPath}crop,`
constructor = constructor + prefix + crop
}
if (props?.hotspot) {
let hotspot = 'hotspot,'
if (props.assetPath) hotspot = `"hotspot": ${assetPath}hotspot,`
constructor = constructor + prefix + hotspot
}
if (props?.width) constructor = constructor + prefix + `"width": ${assetPath}asset->metadata.dimensions.width,`
if (props?.height) constructor = constructor + prefix + `"height": ${assetPath}asset->metadata.dimensions.height,`
if (props?.aspectRatio)
constructor = constructor + prefix + `"aspectRatio": ${assetPath}asset->metadata.dimensions.aspectRatio,`
if (props?.blurDataURL) constructor = constructor + prefix + `"blurDataURL": ${assetPath}asset->metadata.lqip,`
if (props?.filename) constructor = constructor + prefix + `"filename": ${assetPath}asset->originalFilename,`
constructor = constructor + prefix + `"src": ${assetPath}asset->url`
if (props?.src && Object.entries(props.src).length >= 1) constructor = applySourceOptions(constructor, props.src)
return constructor
}
Although most props are now prepared with groqImage—like the alt and blurDataURL for next/image—the crop, hotspot, width, and height still aren't utilized. To utilize them, I created a couple of helper functions that are implemented into the main getImagePropsForSizingFromSanity function.
applyCropToImageSource calculates the rect search parameter of the Sanity image URL to apply the crop based on the image's dimensions.
applyHotspotToImageSource uses the x and y values of the hotspot for the fx and fy focal points defined in the search parameters. It also makes sure the crop search parameter is set to focalpoint.
getImagePropsForSizingFromSanity applies both previously mentioned functions to the src and calculates the maximum width and height attributes based on the actual dimensions of the image in Sanity, compared to the developer-defined max dimensions. If no max width and height are provided, the width and height props remain undefined. This is intentional, so that the fill prop can be properly utilized.
export function applyCropToImageSource(src: string, crop?: SanityCrop, width?: number, height?: number) {
if (!crop || !width || !height) return src
const { top, left, bottom, right } = crop
const croppedWidth = width - (left + right) * width,
croppedHeight = height - (top + bottom) * height
// crop values are fractions (0-1), so scale by the image dimensions to build the pixel-based rect
const rect = `&rect=${Math.round(left * width)},${Math.round(top * height)},${Math.round(croppedWidth)},${Math.round(croppedHeight)}`
return src + rect
}
export function applyHotspotToImageSource(src: string, hotspotCoords?: Pick<SanityHotspot, 'x' | 'y'>) {
if (!hotspotCoords) return src
const { x, y } = hotspotCoords
const fx = `&fx=${x}`,
fy = `&fy=${y}`
if (src.includes('&crop=') && !src.includes('&crop=focalpoint')) {
src = src.replace(
/&crop=(top|bottom|left|right|top,left|top,right|bottom,left|bottom,right|center|entropy)/,
'&crop=focalpoint',
)
} else {
src = src + `&crop=focalpoint`
}
if (!Number.isNaN(x) && x <= 1 && x >= 0) src = src + fx
if (!Number.isNaN(y) && y <= 1 && y >= 0) src = src + fy
return src
}
/**
* # Get Image Props for Sizing from Sanity
*
* - Returns src, height, and width for `next/image` component
* - Both sanity and max heights and widths must be included to include height and width props
* - The src will have focalpoints and cropping applied to it, according to the provided crop, hotspot, and dimensions.
*/
export function getImagePropsForSizingFromSanity(
src: string,
{
crop,
height,
hotspot,
width,
}: Partial<{
crop: SanityCrop
height: Partial<{ sanity: number; max: number }>
hotspot: SanityHotspot
width: Partial<{ sanity: number; max: number }>
}>,
): Pick<ImageProps, 'src' | 'height' | 'width'> {
return {
src: applyHotspotToImageSource(applyCropToImageSource(src, crop, width?.sanity, height?.sanity), hotspot),
height: height?.max ? Math.min(height.sanity || Infinity, height.max) : undefined,
width: width?.max ? Math.min(width.sanity || Infinity, width.max) : undefined,
}
}
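For clarity, here's a hypothetical call site (the image object and max dimensions are illustrative; image follows the SanityImage type above):

const imageProps = getImagePropsForSizingFromSanity(image.src, {
  crop: image.crop,
  hotspot: image.hotspot,
  width: { sanity: image.width, max: 1200 },
  height: { sanity: image.height, max: 800 },
})
// imageProps.src now has rect/fx/fy applied and can be spread into the next/image component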
And lastly, it should be noted that the next.config.ts is modified to implement a custom loader to take advantage of Sanity's built-in image pipeline.
// next.config.ts
import type { NextConfig } from 'next'
const nextConfig: NextConfig = {
images: {
formats: ['image/webp'],
loader: 'custom',
loaderFile: './utils/sanity-image-loader.ts',
remotePatterns: [
{
protocol: 'https',
hostname: 'cdn.sanity.io',
pathname: '/images/[project_id]/[dataset]/**',
port: '',
},
],
},
}
export default nextConfig
// sanity-image-loader.ts
// * Image
import { ImageLoaderProps } from 'next/image'
export default function imageLoader({ src, width, quality }: ImageLoaderProps) {
if (src.includes('cdn.sanity.io')) {
const url = new URL(src)
const maxW = Number(url.searchParams.get('max-w'))
url.searchParams.set('w', `${!maxW || width < maxW ? width : maxW}`)
if (quality) url.searchParams.set('q', `${quality}`)
return url.toString()
}
return src
}
Now that we got the boring stuff out of the way, let's talk about how the hotspot implementation actually works.
The hotspot object is defined like this (in TypeScript):
type SanityHotspot = {
x: number
y: number
width: number
height: number
}
All of these values are numbers 0-1, which means multiplying each value by 100 and adding a % at the end will generally be how we implement the values. x and y are the center of the hotspot. width and height are fractions of the dimensions of the image.
Now there are certainly different ways of using these values to get the results you're looking for (e.g. top, left, and/or translate), but I wanted to use the object-position CSS property, since it doesn't require wrapping the <img> element in a <div> and it works well with object-fit: cover;.
The most important thing to dynamically position the image to keep the hotspot in view is handling resize events. Since I'm using Next.js, I created a React hook to handle this.
I made this hook to return the dimensions of either the specified element or the window, so it can be used for anything. In our use case, the dimensions of the image are all we care about.
'use client'
import { RefObject, useEffect, useState } from 'react'
export function useResize(el?: RefObject<HTMLElement | null> | HTMLElement) {
const [dimensions, setDimensions] = useState({ width: 0, height: 0 })
const handleResize = () => {
const trackedElement = el ? ('current' in el ? el.current : el) : null
setDimensions({
width: trackedElement ? trackedElement.clientWidth : window.innerWidth,
height: trackedElement ? trackedElement.clientHeight : window.innerHeight,
})
}
useEffect(() => {
if (typeof window !== 'undefined') {
handleResize()
window.addEventListener('resize', handleResize)
}
return () => {
window.removeEventListener('resize', handleResize)
}
}, [])
return dimensions
}
Now that we have our useResize hook, we can use it and apply the object-position to dynamically position the image to keep the hotspot in view. Naturally, we'll want to create a new component, so it can be used easily when we need it.
This image component is built off of the next/image component, since we still want to take advantage of all that that component has to offer.
'use client'
// * Types
import { SanityHotspot } from '@/typings/sanity'
export type ImgProps = ImageProps & { hotspotPositioning?: { aspectRatio?: number; hotspot?: SanityHotspot } }
// * React
import { RefObject, useEffect, useRef, useState } from 'react'
// * Hooks
import { useResize } from '@/hooks/use-resize'
// * Components
import Image, { ImageProps } from 'next/image'
export default function Img({ hotspotPositioning, style, ...props }: ImgProps) {
const imageRef = useRef<HTMLImageElement>(null),
{ objectPosition } = useHotspot({ ...hotspotPositioning, imageRef })
return <Image {...props} ref={imageRef} style={{ ...style, objectPosition }} />
}
Thankfully that part was really simple. I'm sure you noticed we still need to implement this useHotspot hook that returns the objectPosition property. First I just wanted to address the changes we made to the ImageProps from next/image.
We added a single property to make it as easy as possible to use. The hotspotPositioning prop optionally accepts both the aspectRatio and the hotspot. Both of these are easily pulled in using the groqImage function.
{ hotspotPositioning?: {
aspectRatio?: number
hotspot?: SanityHotspot
} }
Pitfall: It is possible that the aspectRatio will not be available if you aren't using the Media plugin for Sanity.
If you do not provide both of these, the hotspot will not be dynamically applied.
Okay—the tough part. How exactly does the useHotspot hook calculate the coordinates of the objectPosition property?
By using a useEffect hook, we are able to update the objectPosition useState each time the width and/or height of the <img> element changes. Before actually running any calculations, we always check whether the hotspot and aspectRatio are provided, so—although if you know you don't need to dynamically position the hotspot, you shouldn't use this component—it shouldn't hurt performance if you don't have either of those.
The containerAspectRatio is the aspect ratio of the part of the image that is actually visible. By comparing this to the aspectRatio, which is the full image, we can know which sides of the image are being cropped by the container.
By default we use the x and y coordinates of the hotspot for the objectPosition, in case the hotspot isn't being cut off at all.
Regardless of whether the image is being cropped vertically or horizontally, the calculation is basically the same. First, it calculates the aspect ratio of the visible area and uses the result to determine how far the image overflows on each side, in decimal format (0-1). Next, it calculates how far, if at all, the hotspot bounds overflow. By comparing each side's overflow to its overflowing hotspot-side counterpart, we are able to determine which direction the objectPosition needs to move.
It's important to note that objectPosition does not move the image the same way using top, left, or translate does. Where positive values move the image down and/or right and negative values move the image up and/or left, objectPosition moves the image within its containing dimensions. This means—assuming we start at 50% 50%—making the value lower moves the image right or down respectively, and making the value higher moves the image left or up respectively. This is an inverse of the other positioning properties, and objectPosition doesn't use negative values (at least not for how we want to use it). This is why the calculations are {x or y} ± ({total overflow amount} - {hotspot overflow amount}).
Lastly, we have the situation where two sides are overflowing. In this case we want to balance how much each side is overflowing to find a middle ground. This is simply 2 * {x or y} - 0.5.
Once calculations are made, we convert the numbers to a percentage with a min max statement to make sure it never gets inset.
function useHotspot({
aspectRatio,
hotspot,
imageRef,
}: {
aspectRatio?: number
hotspot?: SanityHotspot
imageRef?: RefObject<HTMLImageElement | null>
}) {
const [objectPosition, setObjectPosition] = useState('50% 50%'),
{ width, height } = useResize(imageRef)
useEffect(() => {
if (hotspot && aspectRatio) {
const containerAspectRatio = width / height
const { height: hotspotHeight, width: hotspotWidth, x, y } = hotspot
let positionX = x,
positionY = y
if (containerAspectRatio > aspectRatio) {
// Container is wider than the image (proportionally)
// Image will be fully visible horizontally, but cropped vertically
// Calculate visible height ratio (what portion of the image height is visible)
const visibleHeightRatio = aspectRatio / containerAspectRatio
// Calculate the visible vertical bounds (in normalized coordinates 0-1)
const visibleTop = 0.5 - visibleHeightRatio / 2,
visibleBottom = 0.5 + visibleHeightRatio / 2
const hotspotTop = y - hotspotHeight / 2,
hotspotBottom = y + hotspotHeight / 2
// Hotspot extends above the visible area, shift it down
if (hotspotTop < visibleTop) positionY = y - (visibleTop - hotspotTop)
// Hotspot extends below the visible area, shift it up
if (hotspotBottom > visibleBottom) positionY = y + (hotspotBottom - visibleBottom)
// Hotspot extends above and below the visible area, center it vertically
if (hotspotTop < visibleTop && hotspotBottom > visibleBottom) positionY = 2 * y - 0.5
} else {
// Container is taller than the image (proportionally)
// Image will be fully visible vertically, but cropped horizontally
// Calculate visible width ratio (what portion of the image width is visible)
const visibleWidthRatio = containerAspectRatio / aspectRatio
// Calculate the visible horizontal bounds (in normalized coordinates 0-1)
const visibleLeft = 0.5 - visibleWidthRatio / 2,
visibleRight = 0.5 + visibleWidthRatio / 2
const hotspotLeft = x - hotspotWidth / 2,
hotspotRight = x + hotspotWidth / 2
// Hotspot extends to the left of the visible area, shift it right
if (hotspotLeft < visibleLeft) positionX = x - (visibleLeft - hotspotLeft)
// Hotspot extends to the right of the visible area, shift it left
if (hotspotRight > visibleRight) positionX = x + (hotspotRight - visibleRight)
// Hotspot extends beyond the visible area on both sides, center it
if (hotspotLeft < visibleLeft && hotspotRight > visibleRight) positionX = 2 * x - 0.5
}
positionX = Math.max(0, Math.min(1, positionX))
positionY = Math.max(0, Math.min(1, positionY))
setObjectPosition(`${positionX * 100}% ${positionY * 100}%`)
}
}, [aspectRatio, hotspot, width, height])
return { objectPosition }
}
I hope this is helpful for people, as I have been trying to find a solid way to implement this for far too long. If this was helpful to you or you have any recommendations to make it better, please let me know!
Please, can you help me here? I am also developing an application using DWR and Spring 6 with Java 17, but I'm getting an exception: engine.js isn't loading.
I am getting an exception that the remote method is undefined; my Java methods aren't getting called.
Follow this link; it will help you: https://robbelroot.de/blog/csharp-bluetooth-example-searching-listing-devices/
Kudos to @Pythoner! You saved my day. I was sure I had tried everything with API keys, lol.
from gtts import gTTS
from pydub import AudioSegment
from pydub.playback import play
# Lyrics to convert to audio (summarized and adapted to a narrated vocal-guide style)
lyrics = """
Una le di confianza, me enamoró y en su juego caí.
La segunda vino con lo mismo, me mintió, yo también le mentí.
Por eso es que ya no creo en el amor.
Gracias a todas esas heridas fue que yo aprendí...
Una conmigo jugó, y ahora con to’a yo juego.
En mi corazón no hay amor, no creo en sentimientos.
Soy un cabrón, se las pego a to’as.
Me tiro a esta, me tiro a la otra.
Mala mía, mai, es que me enzorra.
No quiero que más nadie me hable de amor, ya me cansé.
To’ esos trucos ya me los sé, esos dolores los pasé.
Quisiera que te sientas como yo me siento.
Quisiera cambiarle el final a este cuento.
Una conmigo quiso jugar, pues yo jugué con tres.
Una atrevida me quiso enchular, yo enchulé a las tres.
Y ahora no vuelvo a caer, me quedo con las putas y el poder.
Hoy te odio en secreto, si pudiera te devuelvo los besos.
Me arrepiento mil veces de haber confiado en ti.
Los chocolates y las flores, ahora son dolores.
Y después de la lluvia no hay colores.
Una conmigo jugó y ahora con todas yo juego.
En mi corazón no hay amor, tengo el alma en fuego.
Y no me hables de sentimientos, porque eso en mí ya está muerto.
"""
# Convert text to speech
tts = gTTS(lyrics, lang='es', slow=False)
audio_path = "/mnt/data/0_Sentimientos_GuiaVoz.mp3"
tts.save(audio_path)
audio_path
Thank you for the interesting information
I also changed my package.json to what was indicated in the terminal; after that I ran npx expo i --fix in the terminal, and everything worked :)
Kudos!
Another option for a maintained package for this use-case: https://packagist.org/packages/wikimedia/minify
Thanks for this discussion. I am trying to do the same for my application, but I have to do this for several images sequentially, so I tried the same thing in a for loop, e.g.:
import numpy as np
import matplotlib.pyplot as plt
from mpl_point_clicker import clicker  # assuming clicker comes from mpl_point_clicker

for i_ in range(2):
    fig, ax = plt.subplots()
    # ax.add_artist(ab)
    for row in range(1, 30):
        tolerance = 30  # points
        ax.plot(np.arange(0, 15, 0.5), [i * row / i for i in range(1, 15 * 2 + 1)], 'ro-', picker=tolerance, zorder=0)
    fig.canvas.callbacks.connect('pick_event', on_pick)  # on_pick is defined elsewhere
    klicker = clicker(ax, ["event"], markers=["x"], **{"linestyle": "--"})
    plt.draw()
    plt.savefig(f'add_picture_matplotlib_figure_{i_}.png', bbox_inches='tight')
    plt.show()
But I get the click functionality only for the last image. How can I get it to work for all the images?
What is the JS in the first comment before the HTML?
Agree with @Nguyen above: I had this error across Mac and PC; simply restarting the kernel in Jupyter fixed it in both cases.
There is a thread for this bug in Apple Developer Forums: https://developer.apple.com/forums/thread/778471
grep -E '[a-zA-Z]*[[:space:]]foo' <thefilename> | grep -v '?'
+1
I have the same issue (on an arm64 arch) and did not find a solution.
It happens with different IDEs (VS Code, Cursor, GoLand), so I assume the issue is with go & dlv.
I also tried installing Go with Homebrew, from the Go website, and with gvm. None solved the issue.
Damn it, right after I posted this I realized I'm using :, not =. Problem solved.
For anyone that is searching for this with no luck. Here is the documentation from MS: Share-Types
I am also facing the same issue; even when I try to install an older version of Swagger, I still face the same problem.
PS C:\Users\LENOVO\OneDrive\Desktop\practical-round> npm i @nestjs/swagger@6.3.0
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: @nestjs/[email protected]
npm ERR! node_modules/@nestjs/common
npm ERR! @nestjs/common@"^10.0.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer @nestjs/common@"^9.0.0" from @nestjs/swagger@6.3.0
npm ERR! node_modules/@nestjs/swagger
npm ERR! @nestjs/swagger@"6.3.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR!
npm ERR! For a full report see:
npm ERR! C:\Users\LENOVO\AppData\Local\npm-cache\_logs\2025-04-08T14_34_57_230Z-eresolve-report.txt
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\LENOVO\AppData\Local\npm-cache\_logs\2025-04-08T14_34_57_230Z-debug-0.log
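For reference, the log itself points to the two ways out; a sketch (the swagger major below is an assumption about which release supports @nestjs/common@^10):

npm i @nestjs/swagger@6.3.0 --legacy-peer-deps
# or move to a swagger release whose peer range matches @nestjs/common@^10:
npm i @nestjs/swagger@^7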
SQL problem; see the solution here.
I have the same problem; I couldn't solve it for about two months, but now I have found a solution.
Unfortunately this is the error I get when trying to run the same command. How are you able to build it? What version of the llvm project are you building?
((1,"c"), (23, "a"), (32,"b"))
Same problem here (with other tables), but the "kids" tables aren't filtered as expected.
I have made several attempts, and when I run sudo plank, it works without any issues. However, when I run plank normally (without sudo), the problem occurs. Could anyone suggest what kind of permissions or adjustments are needed to make it work without running as root?
Thanks in advance for your help!
I know that each format has its own compression, and I know that decompression is long and complicated.
But I would like to do the same thing using libraries that allow conversion to a single common format similar to .ppm.
Any suggestions?
PS: trying .ppm, it stores RGB values as unsigned.