I was experiencing this. The problem for me was that I had a container with whitespace: pre-wrap. Removing this solved the spacing issue.
I found out why: there is an attribute called count in my model, so it probably resolves to the count attribute rather than COUNT in SQL.
This was a stupid question, lol. Thanks everyone.
I reckon it's better to use a pure bash script together with a hotkey that the script is bound to; that beats learning another language.
Check whether domutils is installed. If the issue persists, close VS Code and restart; sometimes this is an ESLint bug, and reopening VS Code fixes it.
If your *.csproj file is not in the folder where you're running the dotnet build command, you're likely to face this error. Navigate to your project folder with the cd command. For example, assuming you're in the root directory where your solution file is located, run the following commands in order:
cd CSharpBiggener.Game
dotnet build
Why can't I log in to Azure Database for PostgreSQL when I'm a member of the admin group?
Important considerations when connecting as a Microsoft Entra group member:
Use the exact name of the Entra group you're trying to connect with — spelling and capitalization matter.
Don’t use a group member’s name or alias, just the group name itself.
If the group name has spaces, put a backslash (\) before each space to make it work.
The sign-in token you use is only good for 5 to 60 minutes, so it's best to grab a fresh one right before you log in to the PostgreSQL database. Refer to the link below: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication#use-a-token-as-a-password-for-signing-in-with-psql-or-pgadmin
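For example (the server and group names here are placeholders; requires the az CLI and psql):
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
psql --host=mydemoserver.postgres.database.azure.com --username=My\ Entra\ Group --dbname=postgres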
I may have found a solution to the problem you're describing (without having to downgrade the httpx package):
https://github.com/Shakabrahh/youtube-search-python/commit/842d6e37f479c9c49234511f7980a69f4f2bbd3f
Please keep in mind that this solution is not fully tested and that other errors may occur.
You can override your default JDK from File > New Project Setup > Settings for New Projects > Build, Execution, Deployment > Build Tools > Gradle, and change the Default Gradle JDK there.
FYI: https://issuetracker.google.com/issues/214428183#comment10
You can run
conda config --remove channels intel
Also, as an additional note: the intel channel is based on conda-forge deps, so using it for resolving dependencies instead of anaconda works.
I made simple adjustments: the input can be a negative value, and the user has to enter the value as an increment or decrement accordingly. Not the best or most scalable approach, but it does the work. Can you guys please suggest a better one?
php:
echo "success"; // if success
javascript:
if (data == "success") {
    window.location.href = "target.html";
} else {
    $('#form-messages').html(data);
}
🔍 Problems in your code: incorrect prime-check logic.
You're looping for (i = 2; i <= a; i++), and inside that you're modifying a (a++). That's the core issue: modifying the loop condition variable inside the loop can cause an infinite loop.
if (a % i != 0) is not sufficient to check primality; a number is prime only if it's not divisible by any number from 2 to √n.
You're logging prime or not prime on every iteration, which isn't correct behavior; you should decide once, after the loop. A corrected sketch is below.
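A corrected sketch in plain JavaScript (isPrime is a name I'm introducing):
function isPrime(n) {
  if (n < 2) return false;           // 0, 1 and negatives are not prime
  for (let i = 2; i * i <= n; i++) { // only test divisors up to sqrt(n)
    if (n % i === 0) return false;   // found a divisor, so not prime
  }
  return true;
}
console.log(isPrime(7) ? "prime" : "not prime"); // log once, after the check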
I don't want to change my test case name, so I added a specific tag, [Tags] Try Run, and ran robot --include "Try Run".
You can consider using the undocumented method on the Prisma client called _createPrismaPromise, which takes one argument, a function to wrap, and returns a PrismaPromise, which can then be used in Prisma transactions.
I added these lines of code in my handler.js to solve the issue (path and fs are Node's built-in modules, required at the top of the file):
const path = require('path');
const fs = require('fs');
const configPath = path.join(process.cwd(), 'standalone/.next/required-server-files.json');
const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
const nextConfig = JSON.stringify(config?.config);
process.env.__NEXT_PRIVATE_STANDALONE_CONFIG = nextConfig;
There are multiple issues with the SQL snippet you provided (which I assume is going to be used in JavaScript during a MySQL call?), given your desired result. As I understand it, you would like to see a count of the rows that have a specific value for the 'code' column. Assuming you do not need the id returned as well, and only need the count:
SELECT
COUNT(DISTINCT CASE WHEN Code = 'S03' THEN ID END) AS CodeFound,
COUNT(DISTINCT CASE WHEN Code != 'S03' THEN ID END) AS NoCode
FROM A
That should be enough to return it to you. If you need to check multiple codes at the same time, it would be a good idea to use 'in' or 'not in', but note that you will need parentheses around the value(s), such as
not in ('S03')
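For example (S04 here is a made-up second code):
SELECT
COUNT(DISTINCT CASE WHEN Code IN ('S03', 'S04') THEN ID END) AS CodeFound,
COUNT(DISTINCT CASE WHEN Code NOT IN ('S03', 'S04') THEN ID END) AS NoCode
FROM A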
Shadyar's answer works, but be really careful not to mess up /etc/passwd (like I did), or it's very hard to correct.
Another (safer?) way is to just append cd /your/target/starting/directory/ to the ~/.bashrc file, as suggested by Aigbe above.
@paulo can you give me examples? I face the same issue about when to delete images. Sometimes, with retention rules, an image has already been deleted but is still needed in k8s: when we need to roll the app back to an earlier version, that image is already gone. Basically I only want to delete images that are not in the list of images k8s uses: get all images in k8s, and delete every image that isn't in that list.
I am looking for a solution to a similar problem. I have an Excel sheet with 100 rows, each containing a unique word, and I have a PDF file which contains thousands of sentences including those words. Is there any way I can just upload the Excel file and have a PDF reader take one word at a time, search for it through the PDF, and, once all the words have been searched for, return a PDF with all the words I am looking for highlighted?
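Not a full answer, but a rough sketch of how this could be scripted in Python, assuming the openpyxl and PyMuPDF packages and hypothetical file names:
import fitz  # PyMuPDF
from openpyxl import load_workbook

# read the search words from the first column of the sheet
sheet = load_workbook("words.xlsx").active
words = [row[0].value for row in sheet.iter_rows() if row[0].value]

doc = fitz.open("input.pdf")
for page in doc:
    for word in words:
        # highlight every occurrence of the word on this page
        for rect in page.search_for(str(word)):
            page.add_highlight_annot(rect)
doc.save("highlighted.pdf")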
You are using SCRAM-SHA-512, right?
SCRAM doesn't require serviceName="kafka"; in the JAAS file.
You can also use the config below in server.properties instead of a separate JAAS file.
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafkaadmin" password="kafkaadmin123456";
Ref: https://docs.confluent.io/platform/7.0/kafka/authentication_sasl/authentication_sasl_scram.html
I'd suggest using ComponentPropsWithRef or ComponentPropsWithoutRef as-is, or extending it with other props if you need to:
import { ComponentPropsWithRef } from 'react'
...
// e.g. everything a <button> accepts, including ref
type ButtonProps = ComponentPropsWithRef<'button'>
This happens to me often when there are multiple signals with errors (gray or yellow colors); check for nodes that require OR gates, or default terminals that are badly named.
Yes, this can be done with SVG. Will provide more details soon.
You can refer to this link: https://support.huaweicloud.com/intl/zh-cn/basics-terraform/terraform_0021.html. It teaches you how to configure the backend; you can use Google Translate to read it in English.
Without seeing the rest of the code, I think the most likely problem is doing all of this before you have the screen initialized. After all, if the screen has not been created yet, where would you expect the input to appear? Here is how I would change it:
import turtle as trtl
troll = trtl.Turtle()
# Create the screen first
wn = trtl.Screen()
clr = input("give a color pls: ")
# properly call bgcolor as a method of the screen, not a standalone function
wn.bgcolor(clr)
wn.mainloop()
In the document https://www.w3.org/TR/fetch-metadata/ it says:
To set the Sec-Fetch-Dest header for a request r:
Assert: r’s url is a potentially trustworthy URL.
And "potentially trustworthy URL" is defined here:
https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy
Items 3 and 4 in this section of the document say:
3. If origin’s scheme is either "https" or "wss", return "Potentially Trustworthy". ...
4. If origin’s host matches one of the CIDR notations 127.0.0.0/8 or ::1/128 [RFC4632], return "Potentially Trustworthy".
So, yes: apparently these headers are only sent if you are running from HTTPS, from localhost, or another special URL, but not from plain http.
You're absolutely right to be cautious when things seem too easy. But here are some reasons why options 2 and 3 are bad options.
Option 1 (a real server like Apache/FastCGI):
Option 2 (running PHP's built-in server, php -S localhost:8000):
"Easier" – true. But there are gotchas.
Not production-ready: PHP’s built-in server is single-threaded and not intended for production use.
No HTTPS support, no access logging, no protection layers like mod_security.
It may hang or drop requests under load (even low load with SSR + multiple API hits).
Option 3 (calling the PHP CLI from JS on the server side, like php index.php --someController='MyController'):
"Must be faster?" Maybe. But here's the devil in the details.
You lose all the benefits of an HTTP server: no persistent processes, no connection pooling, no caching.
Harder to scale later (parallel CLI processes can spike memory/CPU).
Error handling is painful – think stderr, exit codes, etc.
That is my opinion about it; please share your thoughts.
If you want to make the grid items stretch to fit the screen, you might want to use flexbox instead, with grow and wrap. Grid makes evenly sized spaces to put your items in, so your content won't fill the screen if it only fills one grid space. Maybe look into col-span if you want to stretch to fit the screen while still using grid.
You will need a converter to JWT; please check https://medium.com/@wirelesser/oauth2-write-a-resource-server-with-keycloak-and-spring-security-c447bbca363c
This works for me:
PS C:\WINDOWS\system32> Set-Location "set_the_path"
You can use the @dynamic decorator: https://www.union.ai/docs/flyte/user-guide/core-concepts/workflows/dynamic-workflows/
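A minimal sketch, assuming flytekit (the task and names are made up):
from typing import List

from flytekit import dynamic, task

@task
def square(x: int) -> int:
    return x * x

@dynamic
def fan_out(xs: List[int]) -> List[int]:
    # the loop body runs at execution time, so the number of
    # launched tasks can depend on the input
    return [square(x=x) for x in xs]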
Finally I found a way to make it happen, so I'm posting my solution here in case someone is facing the same problem as me.
Because we need to send some special headers to the Azure service when creating the websocket connection, we need a proxy server (the native WebSocket in browsers cannot send custom headers).
server.ts:
import http from "http";
import * as WebSocket from "ws";
import crypto from "crypto";
import fs from "fs";
import path from "path";
// Azure tts
const URL =
"wss://<your_azure_service_origin>.tts.speech.microsoft.com/cognitiveservices/websocket/v2";
const KEY = "your_azure_service_key";
const server = http.createServer((req, res) => {
res.end("Server is Running");
});
server.on("upgrade", (req, socket, head) => {
const remote = new WebSocket.WebSocket(URL, {
headers: {
"ocp-apim-subscription-key": KEY,
"x-connectionid": crypto.randomUUID().replace(/-/g, ""),
},
});
remote.on("open", () => {
console.log("remote open");
const requestId = crypto.randomUUID().replace(/-/g, "");
const now = new Date().toISOString();
// send speech.config
remote.send(
[
`X-Timestamp:${now}`,
"Path:speech.config",
"",
`${JSON.stringify({})}`,
].join("\r\n"),
);
// send synthesis.context
remote.send(
[
`X-Timestamp:${now}`,
"Path:synthesis.context",
`X-RequestId:${requestId}`,
"",
`${JSON.stringify({
synthesis: {
audio: {
// outputFormat: "audio-16khz-32kbitrate-mono-mp3",
outputFormat: "raw-16khz-16bit-mono-pcm",
metadataOptions: {
visemeEnabled: false,
bookmarkEnabled: false,
wordBoundaryEnabled: false,
punctuationBoundaryEnabled: false,
sentenceBoundaryEnabled: false,
sessionEndEnabled: true,
},
},
language: { autoDetection: false },
input: {
bidirectionalStreamingMode: true,
voiceName: "zh-CN-YunxiNeural",
language: "",
},
},
})}`,
].join("\r\n"),
);
const client = new WebSocket.WebSocketServer({ noServer: true });
client.handleUpgrade(req, socket, head, (clientWs) => {
clientWs.on("message", (data: Buffer) => {
const json = JSON.parse(data.toString("utf8")) as {
type: "data" | "end";
data?: string;
};
console.log("Client:", json);
remote.send(
[
`X-Timestamp:${new Date().toISOString()}`,
`Path:text.${json.type === "data" ? "piece" : "end"}`,
"Content-Type:text/plain",
`X-RequestId:${requestId}`,
"", // empty line
json.data || "",
].join("\r\n"),
);
});
const file = createWAVFile(`speech/${Date.now()}.wav`);
remote.on("message", (data: Buffer, isBinary) => {
// console.log("Remote, isBinary:", isBinary);
const { headers, content } = parseChunk(data);
console.log({ headers });
if (isBinary) {
if (headers.Path === "audio") {
// why do we need to skip the first byte?
const audioContent = content.subarray(1);
if (audioContent.length) {
file.write(audioContent);
clientWs.send(audioContent);
}
}
} else if (headers.Path === "turn.end") {
file.end();
}
});
clientWs.on("close", () => {
console.log("client close");
remote.close();
});
clientWs.on("error", (error) => {
console.log("client error", error);
});
});
remote.on("close", (code, reason) => {
console.log("remote close", reason.toString());
});
remote.on("error", (error) => {
console.log("remote error", error);
});
});
});
function parseChunk(buffer: Buffer) {
const len = buffer.length;
const headers: string[][] = [];
// skip the first bytes
// ? unclear what the first bytes mean
let i = 2;
let temp: number[] = [];
let curr: string[] = [];
let contentPosition: number;
for (; i < len; i++) {
if (buffer[i] === 0x3a) {
// :
curr.push(Buffer.from(temp).toString());
temp = [];
} else if (buffer[i] === 0x0d && buffer[i + 1] === 0x0a) {
// \r\n
// maybe empty line
if (temp.length) {
curr.push(Buffer.from(temp).toString());
temp = [];
headers.push(curr);
curr = [];
}
i += 1; // skip \n
contentPosition = i;
if (headers.at(-1)?.[0] === "Path") {
// if we get `Path`
break;
}
} else {
temp.push(buffer[i]);
}
}
const obj: Record<string, string> = {};
for (const [key, value] of headers) {
obj[key] = value;
}
const content = buffer.subarray(contentPosition!);
return { headers: obj, content };
}
// for test
function createWAVFile(
filename: string,
sampleRate = 16000,
bitDepth = 16,
channels = 1,
) {
let dataLength = 0;
let data = Buffer.alloc(0);
return {
write(chunk: Buffer) {
dataLength += chunk.length;
data = Buffer.concat([data, chunk]);
},
end() {
const byteRate = sampleRate * (bitDepth / 8) * channels;
const blockAlign = (bitDepth / 8) * channels;
// WAV head
const buffer = Buffer.alloc(44);
buffer.write("RIFF", 0); // ChunkID
buffer.writeUInt32LE(36 + dataLength, 4); // ChunkSize
buffer.write("WAVE", 8); // Format
buffer.write("fmt ", 12); // Subchunk1ID
buffer.writeUInt32LE(16, 16); // Subchunk1Size (16 for PCM)
buffer.writeUInt16LE(1, 20); // AudioFormat (1 = PCM)
buffer.writeUInt16LE(channels, 22); // Channels
buffer.writeUInt32LE(sampleRate, 24); // SampleRate
buffer.writeUInt32LE(byteRate, 28); // ByteRate
buffer.writeUInt16LE(blockAlign, 32); // BlockAlign
buffer.writeUInt16LE(bitDepth, 34); // BitsPerSample
buffer.write("data", 36); // Subchunk2ID
buffer.writeUInt32LE(dataLength, 40); // Subchunk2Size
const stream = fs.createWriteStream(filename);
stream.write(buffer);
stream.write(data);
stream.end();
console.log(`write to file ${filename}`);
},
};
}
server.listen(8080);
player.ts:
type StreamingAudioPlayerOptions = {
autoPlay: boolean;
};
export class StreamingAudioPlayer {
private context = new AudioContext();
private chunks: Blob[] = [];
private decodeChunkIndex = 0;
private buffers: AudioBuffer[] = [];
private duration = 0;
private decoding = false;
private scheduleIndex = 0;
private currentDuration = 0; // rough record of played duration, for display only; not usable for playback control
private state: "play" | "stop" = "stop";
private isPlaying = false; // whether audio is actually playing
// track the scheduled start time of the next buffer
private nextScheduledTime = 0;
// track the audio sources that have been created
private activeSources: AudioBufferSourceNode[] = [];
private sourceSchedule = new WeakMap<AudioBufferSourceNode, [number]>();
private beginOffset = 0;
private timer: number | null;
constructor(private readonly options: StreamingAudioPlayerOptions) {}
private async decodeAudioChunks() {
if (this.decoding || this.chunks.length === 0) {
return;
}
this.decoding = true;
while (this.decodeChunkIndex < this.chunks.length) {
const originBuffer =
await this.chunks[this.decodeChunkIndex].arrayBuffer();
// Step 1: view the bytes as Int16
const int16 = new Int16Array(originBuffer);
// Step 2: convert to Float32
const float32 = new Float32Array(int16.length);
for (let i = 0; i < int16.length; i++) {
float32[i] = int16[i] / 32768; // Normalize to [-1.0, 1.0]
}
// Step 3: create a mono AudioBuffer
const audioBuffer = this.context.createBuffer(
1, // mono
float32.length,
16000, // sampleRate
);
audioBuffer.copyToChannel(float32, 0);
this.buffers.push(audioBuffer);
this.duration += audioBuffer.duration;
console.log(
`chunk ${this.decodeChunkIndex} decoded, total buffer duration: ${this.duration}`,
);
this.decodeChunkIndex++;
if (this.state === "play" && !this.isPlaying) {
console.log("ready to play");
this._play();
} else if (this.state === "stop" && this.options.autoPlay) {
this.play();
}
}
this.decoding = false;
}
async append(chunk: Blob) {
this.chunks.push(chunk);
if (!this.decoding) {
this.decodeAudioChunks();
}
}
private scheduleBuffers() {
while (this.scheduleIndex < this.buffers.length) {
if (this.nextScheduledTime - this.context.currentTime > 10) {
// keep roughly 10s of audio buffered ahead
break;
}
const buffer = this.buffers[this.scheduleIndex];
const source = this.context.createBufferSource();
source.buffer = buffer;
// record and advance the scheduled time
const startTime = this.nextScheduledTime;
this.nextScheduledTime += buffer.duration;
source.connect(this.context.destination);
if (this.beginOffset !== 0) {
source.start(startTime, this.beginOffset);
this.beginOffset = 0;
} else {
source.start(startTime);
}
this.sourceSchedule.set(source, [startTime]);
console.log(`schedule chunk ${this.scheduleIndex}`);
this.activeSources.push(source);
const index = this.scheduleIndex;
this.scheduleIndex++;
// listen for playback end to maintain state
source.addEventListener("ended", () => {
// remove the finished source
this.activeSources = this.activeSources.filter((s) => s !== source);
if (this.state !== "play") {
return;
}
console.log(`chunk ${index} play finish`);
if (this.scheduleIndex < this.buffers.length) {
// keep scheduling the chunks that haven't been played
this.scheduleBuffers();
} else if (this.activeSources.length === 0) {
// if no sources remain, stop playback
this._stop();
}
});
}
}
private _play() {
// use a timer to roughly track the played duration
// ? what if playback stalls
const updatePlayDuration = (timestamp1: number) => {
return (timestamp2: number) => {
this.currentDuration += timestamp2 - timestamp1;
this.timer = requestAnimationFrame(updatePlayDuration(timestamp2));
};
};
this.timer = requestAnimationFrame(updatePlayDuration(performance.now()));
// initialize the playback clock to the current context time
this.nextScheduledTime = this.context.currentTime;
this.isPlaying = true;
this.scheduleBuffers();
}
private _stop() {
if (this.state !== "play") {
return;
}
// stop all active audio sources
this.activeSources.forEach((source, index) => {
if (index === 0) {
// current playing source
const offset =
this.context.currentTime - this.sourceSchedule.get(source)![0];
console.log("offset:", offset);
}
source.stop();
});
cancelAnimationFrame(this.timer!);
this.timer = null;
this.activeSources = [];
// not sure whether all audio chunks have been loaded
this.state = "stop";
this.isPlaying = false;
console.log(`played duration: ${this.currentDuration}`);
}
resume() {
// resuming should be based on the played duration
// because the played duration can be adjusted via a timeline (not implemented yet)
this.scheduleIndex = 0;
let d = 0;
for (; this.scheduleIndex < this.buffers.length; this.scheduleIndex++) {
const buffer = this.buffers[this.scheduleIndex];
if (d + buffer.duration * 1000 > this.currentDuration) {
break;
}
d += buffer.duration * 1000;
}
this.state = "play";
this.beginOffset = (this.currentDuration - d) / 1000;
console.log("resume offset", this.beginOffset);
this._play();
}
play() {
if (this.state === "play") {
return;
}
this.state = "play";
this.duration = this.buffers.reduce((total, buffer) => {
return total + buffer.duration;
}, 0);
if (this.duration === 0) {
console.warn("waiting buffer");
return;
}
this.currentDuration = 0;
this.scheduleIndex = 0;
console.log(this);
this._play();
}
pause() {
this._stop();
}
}
index.js:
// something like:
const player = new StreamingAudioPlayer({ autoPlay: true });
const ws = new WebSocket("ws://localhost:8080"); // the proxy server above
ws.addEventListener("open", () => {
  ws.send('{"type":"data","data":"hello"}');
  ws.send('{"type":"data","data":" world"}');
  ws.send('{"type":"end"}');
});
ws.addEventListener("message", (e) => {
  // browser WebSockets deliver binary frames as Blob by default
  player.append(e.data);
});
The code is for reference only. If anyone has any better suggestions, please feel free to share your thoughts.
This is the workaround that worked for me! https://docs.hetrixtools.com/microsoft-teams-how-to-remove-name-used-a-workflow-template-to-send-this-card-get-template/
There is one command that returns exit code 5.
Example:
- name: Reset git store credentials
  run: |
    git config --global --unset credential.helper # First run may succeed
    git config --global --unset credential.helper # Second run returns `Process completed with exit code 5.`
You are pretty close. The code you are trying to use is not working because you compare an array, i.e. every line in the txt file, with the user input for loginuser and loginpass. The way you structured your if statement also will not work properly, as it will only check for loginpass, not loginuser.
The way I would recommend correcting your code is shown below:
# create an array of strings from the lines in the txt file
database = file.readlines()

# check each string in the array
for line in database:
    # check if both user inputs are on a single line
    if loginuser in line and loginpass in line:
        print("Login successful")
        # this ends the larger while loop, allowing the user to use other methods
        y += 1
        # end the for loop that goes through each line
        break
Thanks for your answer. I am able to fix the issue now. It would be great if you could share more on what the flag
--build-remote true
and the environment variable
SCM_DO_BUILD_DURING_DEPLOYMENT=true
do.
How do I add a column to indicate whether the mismatch appears in df1 or df2?
Is this what you're looking for? Search example.
It returns the latest version of org.mongodb:bson, like below.
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">2</int>
<lst name="params">
<str name="q">g:org.mongodb AND a:bson</str>
<str name="core"/>
<str name="indent">off</str>
<str name="spellcheck">true</str>
<str name="fl">id,g,a,latestVersion,p,ec,repositoryId,text,timestamp,versionCount</str>
<str name="start"/>
<str name="spellcheck.count">5</str>
<str name="sort">score desc,timestamp desc,g asc,a asc</str>
<str name="rows">20</str>
<str name="wt">xml</str>
<str name="version">2.2</str>
</lst>
</lst>
<result name="response" numFound="1" start="0">
<doc>
<str name="a">bson</str>
<arr name="ec">
<str>-sources.jar</str>
<str>.pom</str>
<str>-javadoc.jar</str>
<str>.jar</str>
</arr>
<str name="g">org.mongodb</str>
<str name="id">org.mongodb:bson</str>
<str name="latestVersion">5.4.0</str>
<str name="p">jar</str>
<str name="repositoryId">central</str>
<arr name="text">
<str>org.mongodb</str>
<str>bson</str>
<str>-sources.jar</str>
<str>.pom</str>
<str>-javadoc.jar</str>
<str>.jar</str>
</arr>
<long name="timestamp">1742506485247</long>
<int name="versionCount">204</int>
</doc>
</result>
<lst name="spellcheck">
<lst name="suggestions"/>
</lst>
</response>
Redshift has supported json_build_object since 2022: https://docs.aws.amazon.com/redshift/latest/dg/r_object_function.html
Still no json_agg though.
In my case I needed to change the AllowOverride property in my /etc/httpd/conf/httpd.conf.
This is possible if you use VSCode. It has supported clickable hyperlinks for a number of years.
PS C:\> write-output "https://powershellgallery.com/packages/Microsoft.Graph/2.26.1"
Instead of @PostConstruct, listen to the ApplicationReadyEvent (e.g. with Spring's @EventListener(ApplicationReadyEvent.class)).
I ended up adapting this answer https://stackoverflow.com/a/28945444/22825680 to create a "refreshable" replay subject (which is passed as refreshSubject in my example), then an operator that uses it to replace shareReplay(1) in these instances.
class CacheSubject<T> implements SubjectLike<T>
{
// Adapted from https://stackoverflow.com/a/28945444/22825680
private readonly mySubjects!: ReplaySubject<Observable<T>>;
private readonly myConcatenatedSubjects!: Observable<T>;
private myCurrentSubject!: ReplaySubject<T>;
constructor(resetSignal$?: Observable<void>)
{
this.mySubjects = new ReplaySubject<Observable<T>>(1);
this.myConcatenatedSubjects = this.mySubjects.pipe(
concatAll(),
);
this.myCurrentSubject = new ReplaySubject<T>();
this.mySubjects.next(this.myCurrentSubject);
if (resetSignal$ != null)
{
resetSignal$.subscribe({
next: () =>
{
this.reset();
},
});
}
}
public reset(): void
{
this.myCurrentSubject.complete();
this.myCurrentSubject = new ReplaySubject<T>();
this.mySubjects.next(this.myCurrentSubject);
}
public next(value: T): void
{
this.myCurrentSubject.next(value);
}
public error(err: any): void
{
this.myCurrentSubject.error(err);
}
public complete()
{
this.myCurrentSubject.complete();
this.mySubjects.complete();
// Make current subject unreachable.
this.myCurrentSubject = new ReplaySubject<T>();
}
public subscribe(observer: Observer<T>): Subscription
{
return this.myConcatenatedSubjects.subscribe(observer);
}
}
function cache<T>(
resetSignal$?: Observable<void>,
): MonoTypeOperatorFunction<T>
{
return share<T>({
connector: () =>
{
return resetSignal$ == null
? new ReplaySubject<T>(1)
: new CacheSubject(resetSignal$);
},
resetOnError: true,
resetOnComplete: false,
resetOnRefCountZero: false,
});
}
// Acts like shareReplay(1) but clears its buffer when resetSignal$ emits.
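// Usage sketch (source$ and refresh$ are placeholder streams):
//   const shared$ = source$.pipe(cache(refresh$));
// When refresh$ emits, the replay buffer is cleared, so late subscribers
// wait for the next emission instead of receiving the stale value.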
You need to build project B, get the DLL, and add it as a reference in project A.
Here is how to add a reference inside a project in the MSDN docs.
This is built into PHPUnit now: https://github.com/sebastianbergmann/phpunit/pull/6118
Since I can't comment yet, I'll add an answer, which might be helpful. I ran into the "java.lang.Compiler" error too, but was using Maven. I wanted to build the app with an older Java version (Java 8) than my CI system runtime (Java 21) and had to wrap the kie-maven-plugin execution in exec-maven-plugin with a specific executable and arguments to get it working.
Yes, this is the best that Python offers right now for specifying the type that was used to initialize the "self" reference. It is not perfect, and you are correct that it can cause issues when subclassing, but it is less an issue and more the tedium of having to go through and manually specify the correct typing in the string.
I am not sure that improving or altering this system is a top priority. It is annoying, but it is also not going to be a concern for 99% of people who use Python regularly (not that your confusion/irritation regarding it is invalid).
For some reason the terminal was displaying the name of my folder as Personal:Study when it was actually Personal/Study, so the path ~project/Personal:Study/FirstProject/src was not working for Gradle.
Just renaming the folder to Personal-Study made it start working again :)
Well, for what it is worth, I found an answer to this by asking Copilot. Unfortunately, on Visual Studio 2022 I had to change the "Environment Font", as the Copilot setting did not seem to be exposed.
Posting Copilot's response for posterity and to create a self-referential loop for the scrapers:
To change the font size in the Visual Studio GitHub Copilot Chat window, follow these steps:
Open Visual Studio.
Navigate to Tools > Options.
In the Options dialog, go to Environment > Fonts and Colors.
In the Show settings for dropdown, look for an option related to "GitHub Copilot Chat" or similar.
Adjust the font size as desired and click OK to apply the changes.
If you don't see a specific option for the GitHub Copilot Chat window, the font size might be tied to the general environment font settings. Adjust the Environment Font or similar settings to see if it affects the chat window.
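// Walk the points two at a time: (0,1), (2,3), ...; each pair is one segment.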
for (size_t i = 1; i < myVector.size(); i+=2) {
auto X1 = myVector[i - 1].first;
auto Y1 = myVector[i - 1].second;
auto X2 = myVector[i].first;
auto Y2 = myVector[i].second;
//DrawLine(X1,Y1,X2,Y2);
}
What's the role of Python in ArcGIS?
There are two main parts of the ArcGIS ecosystem that involve Python: the ArcGIS API for Python and ArcPy.
In short, the ArcGIS API for Python is mainly for writing code to build data pipelines that work with the cloud and web GIS, while ArcPy enhances the customizability of traditional desktop GIS such as ArcGIS Pro, so you can glue various tools and steps together with a Python script instead of in the UI.
With regard to your question, if you aren't working with Pro, it is likely that you'll build with the ArcGIS API for Python. There is some pretty comprehensive documentation on our developer website.
If you have questions, feel free to reach out on our forum.
This might not work in all cases, but if you are receiving the 1 MB error message with regard to a Lambda function sitting in front of a CloudFront distribution, you can go to the CloudFront distribution's Behavior tab and add the function there with the ARN and version number, as in this image. You need to replace the all-caps letters with your use case. The ARN with version number can be found after publishing the Lambda@Edge function.
please help me,
Thank you so much. This fixed my problem!
Feel free to take a look at this article. Setting the translation factor to 1 should make the scene view and the real world aligned.
Are you using world-scale AR? Parallax in the real world might cause confusion when you view the scene through a camera.
Also, to get a quicker response, reach out to us on the forum.
Any instructions on how to do it? For example, how to extract the application from the card and reinstall it with a different AID value.
Not sure what I changed, but the last change was that I deleted the ivysettings.xml file, and it seems to be working correctly.
Have you found a solution yet? I've tried cleaning node modules and pods and reinstalling, but nothing works.
I studied the git source code and found:
[color "diff"]
commit = yellow
func = cyan
meta = green # the color for a valid signature
frag = blue
old = red
new = green
newMoved = cyan
oldMoved = blue
context = default
whitespace = red # the color for an invalid signature
This link explains how to install Odoo + PostgreSQL on Alpine Linux.
I need help with this error: ExternalError: TypeError: Cannot read properties of undefined (reading 'tp$mro') on line 5 in dacoolthing.py
This is the code:
from turtle import *

class char(Turtle):
    def __init__(self):
        super().__init__()
        self.penup()
        self.shape("turtle")
        self.goto(0, 0)
        self.speed(0)

    def attack():
        print()
As @Slaw stated in the comments, I can use AbstractCopyTask#exclude. I don't know how I broke that before, but it works now with the following code:
jar.exclude((fileTreeElem) -> {
File theFile = fileTreeElem.getFile();
if (theFile.getPath().startsWith(customDir.getPath()))
return false;
// ... rest of the code
}
This is a known issue. It seemed to be solved with a previous release, but apparently it was not completely solved. See the GitHub issues here for more information and updates: https://github.com/ERGO-Code/HiGHS/issues/2146 and https://github.com/ERGO-Code/HiGHS/issues/1670
One thing you can try is to set the number of threads to one.
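For example, with the highspy bindings (the model file name is made up; other front ends expose the same "threads" option):
import highspy

h = highspy.Highs()
h.readModel("model.lp")         # hypothetical input
h.setOptionValue("threads", 1)  # run single-threaded
h.run()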
When you set a control object's id to something other than wxID_ANY in wxFormBuilder, that can cause this issue. This may be a bug in the new version of wxFormBuilder.
Update: I used ChatGPT to help me figure this out. For future reference, here's some sample code to accomplish this:
import streamlit as st
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet
import io
def create_pdf_buffer(elements):
    """Takes a list of ReportLab Flowable elements and returns a PDF buffer."""
    buffer = io.BytesIO()
    doc = SimpleDocTemplate(buffer, pagesize=letter)
    doc.build(elements)
    buffer.seek(0)
    return buffer
# Streamlit app
st.title("PDF Generator with ReportLab and Streamlit")
# User input
user_list = st.text_area("Enter list items (one per line):").splitlines()
# Only proceed if there's input
if user_list:
    styles = getSampleStyleSheet()
    elements = []

    # Build elements list outside of PDF function
    elements.append(Paragraph("Sample PDF Report", styles['Title']))
    elements.append(Spacer(1, 12))
    for item in user_list:
        elements.append(Paragraph(f"- {item}", styles['Normal']))
        elements.append(Spacer(1, 6))

    # Generate PDF buffer
    pdf_buffer = create_pdf_buffer(elements)

    # Download button
    st.download_button(
        label="Download PDF",
        data=pdf_buffer,
        file_name="report.pdf",
        mime="application/pdf"
    )
This is definitely the HENNGE coding challenge for the internship.
It's most likely due to your antivirus doing man-in-the-middle scanning of HTTPS. Disable the "web shield", or whatever they call it in your particular antivirus, and try again. It causes issues for Composer as well, which is when I ran into the issue.
You can see this related issue, among others, in the composer repo on GitHub.
@scipioAfrianus asked if you can get rid of the glow / drop shadow.
You can.
fig.update_layout(
title_text="Basic Sankey Diagram",
font_family="Courier New",
font_color="blue",
font_size=12,
title_font_family="Times New Roman",
title_font_color="red",
font_shadow="None",
)
Right click the res folder -> New -> Directory -> name it layout -> click Finish.
After that, right click the layout folder -> New -> look for the XML option -> click on Layout XML File.
That's it!
Running bin/rails db:migrate after pg_restore fixed the issue for me (I had a pending migration).
params.as_json does the trick concisely.
I am having the same problem; was there any resolution?
I tried manually modifying the requirements.txt file to add libglib2.0, libnss3, libgconf, and libfontconfig1 as shown on this other thread, but it didn't seem to have any effect.
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 127
Also tried connecting with SSH to pip install selenium directly in hopes the chrome driver dependencies would get updated.
You can use this ASN.1 parser.
It can parse/create ASN.1 data. You have to convert binary data to hex before parsing. It can also parse to a specific depth.
If you want to clone just the array object container, just use slice:
const a = [1,2,3,4];
const b = a.slice(0); // another array object containing the same content
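Note that this is a shallow copy; the array object is new but the elements are shared:
const c = [{ v: 1 }];
const d = c.slice(0);
console.log(d !== c);       // true: different array objects
console.log(d[0] === c[0]); // true: same element reference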
Use the Inline Parameters for VSCode extension (by Liam Hammett). The marketplace link:
https://marketplace.visualstudio.com/items?itemName=liamhammett.inline-parameters
Hope you enjoy it!
Yes. First it takes tag_parts.last, then capitalize, and then replace.
You want to make sure that the Google Cloud Build service account used by the service itself has the "Cloud Build Service Account" IAM role. No idea why Google made this required role something the service account can be removed from, but I just ran across this issue when doing work with the terraform google_project_iam_binding resource.
Quick way to manually add this:
Get the project number. For instance via gcloud projects list
Go into the GCP console
Go into IAM
Ensure you are on the project with the issue
Select: Grant Access
Principal: PROJECT_NUMBER@cloudbuild.gserviceaccount.com
Role: Cloud Build Service Account
Note, the role that is actually granted is "roles/cloudbuild.builds.builder", and the service account isn't something that shows in the Google Cloud console by default.
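The same grant from the CLI (substitute your own project id and number):
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" --role="roles/cloudbuild.builds.builder"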
Any luck on this? I’m having the same issue.
Did you ever find the solution to this?
I'm in exactly the same spot right now. I had to change the port to 8082 as well, though my issue lies with Grafana; the metrics are not being output to Prometheus for some reason :(
I know this is a somewhat old question, but to solve this I decided to use HttpContext.Items in my implementation of IAuthenticationService to put a value there with a key like AuthenticationException, and then read it in other middleware or a filter.
Note that it may look tempting to use HttpContext.Features.Get<IAuthenticateResultFeature>().AuthenticateResult, but this feature is not set for failed authentications! I spent several hours debugging this, only to understand that I needed to look for another way.
I know I am saying this after multiple years and you probably do not care anymore.
However, in order to run a threaded program on the IBM i you need to run it with SBMJOB with the option ALWMLTTHD(*YES). Otherwise, you will get error 3029. I know, it is a very unhelpful error to get when it has absolutely nothing to do with the real reason your threaded program is not running.
Hi, I had the same problem and solved it like this:
1. Close the Arduino IDE but leave the board connected to USB
2. Open Device Manager
3. Open Ports and click the drop-down arrow to select your port
4. Right click and select "Properties"
5. Open the "Port Settings" tab
6. Click "Advanced" to open a new window
7. Change the port to a free one
8. OK, OK, and close
Reopen the Arduino IDE and try again; it worked for me.
I notice you're making your requests over HTTP, not HTTPS; I imagine Azure is stricter on POST requests than GET, due to encrypting the body of the request.
Make sure you have "HTTPS Only" turned off in the Azure Portal.
It looks like the library you're trying to install runs clang and other C toolchain commands in the terminal.
As the program doesn't find those files, it gives an error.
I'm not sure how I can help you fix that; maybe install clang and all those files. Anyway, that's what I've found.
I updated PyCharm to the newest version and the errors were gone.
The Uutf library has a fold_utf_8 function, so I've created this function:
let str_chars (s : string) : Uchar.t list =
  (* the fold conses onto the accumulator, so reverse to restore order *)
  List.rev
    (Uutf.String.fold_utf_8
       (fun acc _ c ->
         match c with
         | `Uchar u -> u :: acc
         | `Malformed _ -> failwith "malformed UTF-8")
       [] s)
I still find it crazy that something so simple can be so difficult to find information on. I've lost some of my sanity trying to find the answer, so I'm posting this here so that hopefully nobody has to go through this again.
There is a recipe hosted in the meta-python-ai layer https://layers.openembedded.org/layerindex/recipe/403973/
Right-click the json file > Open with > 'Other...'
Select 'Internal editors' radio button
Select 'Text Editor'
Select 'Use it for all json files' radio button, then OK.
Some possible solutions I found for your problem are:
If you are unable to find usage stats, try using the command line interface in google cloud to view/set up logs:
https://cloud.google.com/logging/docs/reference/tools/gcloud-logging#getting_started
Looking at quotas:
https://developers.google.com/maps/billing-and-pricing/manage-costs#quotas
- It states that when your project reaches the quota limit, "your service stops responding to requests". Are you sure that your limit is properly set up? Compare how much you got charged to the pricing: https://developers.google.com/maps/billing-and-pricing/pricing
- You can set up alerts (both usage and budget) at points where you haven't used up all your free usage to figure out at what rate your api calls are being used (i.e. 1000 map loads)
I am also making a game that requires the same countdown. This is what I came up with (partly borrowed). Replace the 60 with however long you want the timer to be.
import sys
import time

def countdown(timer):
    while timer > 0:
        # \r moves the cursor back to the start of the line so the number updates in place
        sys.stdout.write(f"\r{timer}  ")
        sys.stdout.flush()
        time.sleep(1)
        timer -= 1
    print("\rTime's up!")

countdown(60)
Can you share your code so that we can replicate the issue on our end? This way it's impossible to understand what is causing it. Or is your website hosted online? If yes, then share the URL.
Yes, it is possible, but there are many layers to how SharePoint allows enterprises to provision such access. You stated you have admin access, but I do not know if that is an admin of the device, an enterprise network sysadmin, a general enterprise admin of SharePoint, or a specific SharePoint site admin. Each of these levels would need to be approved in order to invoke API uploads to SharePoint; the level of headache that will create depends on how your enterprise has configured everything.
In the code you shared I only see one thing potentially missing, which is the office365 AuthenticationContext class instance. AuthenticationContext normally handles all of the credential and access hand-off behind the scenes and has generally been a lifesaver for me when automating SharePoint uploads and downloads. Historically I have had mixed success doing all the credential work by hand, but it worked on the first attempt almost every time with AuthenticationContext. You can read more about it in its GitHub file:
If you are experiencing any further issues while using AuthenticationContext, it may be worth double-checking all of the syntax it requires. If you are still having issues, you may need to ask your enterprise to double-check (i) that they allow API uploads, and (ii) that you, and the way you are configured, are fully allowed to execute them. The exceptions you get when either of these is not working will normally be (a) some sort of request denial straight up saying that you do not have appropriate access to something, (b) a vaguer message that the SharePoint you are requesting cannot be found even though you know you entered the correct one, or, most frustratingly, (c) some random function taking an API action crashes, which would be the case if the API allows the request to go through but blocks you from taking any action.
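A rough sketch of that flow (the site URL, credentials, and file names are placeholders; assumes the Office365-REST-Python-Client package):
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext

site_url = "https://yourtenant.sharepoint.com/sites/yoursite"
ctx_auth = AuthenticationContext(site_url)
if ctx_auth.acquire_token_for_user("user@yourtenant.com", "password"):
    ctx = ClientContext(site_url, ctx_auth)
    folder = ctx.web.get_folder_by_server_relative_url("Shared Documents")
    with open("report.xlsx", "rb") as f:
        # upload_file pushes the bytes to the target library in one call
        folder.upload_file("report.xlsx", f.read()).execute_query()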
I really struggled with sqlx and a custom enum type: I ended up adding strum to generate the &str values to make the query with the enum as parameter work:
#[derive(Debug, sqlx::Type, AsRefStr)]
#[sqlx(type_name = "category_type", rename_all = "lowercase")]
pub enum CategoryType {
#[strum(serialize = "music")]
Music,
#[strum(serialize = "audiobook")]
Audiobook,
}
#[derive(Debug, sqlx::FromRow)]
pub struct Category {
pub id: sqlx::types::Uuid,
pub name: String,
pub category_type: CategoryType,
}
async fn list(db: &PgPool, category_type: CategoryType) -> Result<Vec<Category>, errors::AppError> {
let result = sqlx::query_as!(
Category,
r#"
SELECT
id, name, category_type AS "category_type!: CategoryType"
FROM categories
WHERE category_type = ($1::text)::category_type
"#,
category_type.as_ref()
)
.fetch_all(db)
.await?;
Ok(result)
}
This feels quite complicated, but I think it's the right way? What do you think?
If your version of Invoke-SqlCmd does not allow usage of the TrustServerCertificate flag (for any reason; also mentioned in other answers), try excluding the flag from the execution. That was enough in my case, because a certificate was not required. (Not the best advice, but it works.)
It seems it is possible now; documentation is available here: https://developers.facebook.com/docs/whatsapp/cloud-api/typing-indicators
You shouldn't define a custom session route when using next-auth; it's already handled by NextAuth. Your custom route leads it into recursion.
Hopefully this makes sense to anyone reading this.
To get the results I needed I used helper columns. To keep it simple, I'm just going to use Group 1 and Group 2. Including Group 3 and Group 4 required additional helper columns, same as Group 1 and Group 2.
First, I broke out the date (e.g. 1/1/2025) into year (e.g. 2025) and week (e.g. 1):
=YEAR(I3)
=WEEKNUM(I3)
Second, I concatenated the two results (e.g. 1-2025):
=K3&"-"&J3
Third, for each group I used a logical OR to compare the subgroups:
For Group 1 =IF(OR(A3>0,B3>0),1,0)
For Group 2 =IF(OR(C3>0,D3>0),1,0)
Output is "1" if the subgroups contain values greater than "0".
Fourth, for each group I used the MAXIFS formula to grab the highest value of each group within each week:
For Group 1 =MAXIFS($M$3:$M$97,$L$3:$L$97,L3)
For Group 2 =MAXIFS($O$3:$O$97,$L$3:$L$97,L3)
Fifth, I summed the outputs of each group's MAXIFS formula (e.g. Group 1 and Group 2):
=SUM(N3,P3)
Sixth, I calculated the weekly MAX number across all Groups:
=MAXIFS($Q$3:$Q$11,$L$3:$L$11,L3)
Seventh, using the UNIQUE formula I mapped out the Week and Year results into a separate column:
=UNIQUE(L3:L11)
And for the final step, I performed a VLOOKUP to pull in the MAXIFS value of each week:
For the first week of 1-2025: =VLOOKUP(S3,$L:$R,7,0)
For the 2nd week of 6-2025: =VLOOKUP(S4,$L:$R,7,0)
For the 3rd week of 7-2025: =VLOOKUP(S5,$L:$R,7,0)