I had the same issue, but the error was in the script: I had used a relative path in the Export-CSV path argument, so to find the CSV I had to check the bin directory. The CSV was there alongside the exe.
Deleting the directory and cloning it again worked for me.
After some research, this is probably related to the design decisions regarding what character encoding to use in Maven.
A probable short answer is:
"Platform dependent."
In IntelliJ, pressing Alt+F12 opens the terminal, which on Windows is PowerShell.
There should be a way to set the platform-dependent value, so I tried the following in PowerShell on Windows. Please try it (I don't have the proper plugin setup to test):
[Console]::InputEncoding = [System.Text.UTF8Encoding]::new();
[Console]::OutputEncoding = [System.Text.UTF8Encoding]::new();
./mvnw spring-boot:run;
Then, use your ./mvnw command and see if it works.
The long background related to this probable answer:
See the design notes below about how the platform encoding was used by plugins.
* ${project.build.sourceEncoding} for source files encoding
(https://cwiki.apache.org/confluence/display/MAVEN/POM+Element+for+Source+File+Encoding)
(defaults to UTF-8 since Maven 4.0.0; no default value was provided in Maven 3.x, meaning that the platform encoding was used by plugins)
See the background of POM Element for Source File Encoding. This is a long explanation of character encoding.
Default Value
As shown by a user poll on the mailing list and the numerous comments on this article, this proposal has been revised: Plugins should use the platform default encoding if no explicit file encoding has been provided in the plugin configuration.
Since usage of the platform encoding yields platform-dependent and hence potentially irreproducible builds, plugins should output a warning to inform the user about this threat, e.g.:
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
This way, users can smoothly update their POMs to follow best practices.
For the [Console]::InputEncoding discussion in PowerShell, see "$OutputEncoding and [Console]::InputEncoding are not aligned - Windows Only".
Please see if this helps you.
I had the same issue, but (because my project uses Git) instead of using flutter create .
I deleted the whole project directory and then cloned it again using Git.
I have the same problem, and I can only get it to work when I run the code on a compute resource with the access mode set to "no isolation shared".
It does not work when the navigation controller has more than one child view controller and then pushes a UIHostingController, right?
To suppress the warning and avoid false-positive "pending changes," you can add this line to your DbContext
configuration:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseNpgsql("YourConnectionStringHere")
        .ConfigureWarnings(warnings =>
            warnings.Ignore(RelationalEventId.PendingModelChangesWarning));
}
This line
.ConfigureWarnings(warnings =>
warnings.Ignore(RelationalEventId.PendingModelChangesWarning));
tells EF Core to ignore the warning about pending model changes, allowing Update-Database to run without requiring a redundant migration.
https://docs.telethon.dev/en/stable/basic/signing-in.html#:~:text=Note,for%20bot%20accounts.
This API ID and hash is the one used by your application, not your phone number. You can use this API ID and hash with any phone number or even for bot accounts.
You can register multiple API IDs without needing multiple accounts.
I use magit file dispatch with emacs (for reference: https://emacs.stackexchange.com/a/75703).
I am also searching for an alternative for VS Code but have not been successful. For now I run magit in Emacs terminal mode within VS Code when I need this particular function; clunky, but it works.
After a lot of testing with configuration of the plugin and customizing the JWT token to create more dynamic roles I ended up having to write a custom version of the 'rabbit_auth_backend_oauth2' plugin to have full control over roles-to-vhost permissions.
It's frustrating that Azure doesn't allow more customization, as claims mapping would have worked if it weren't limited to a single transformation expression per claim.
When using the HSI directly as the PLL input on STM32F303xE
devices, you also have to configure the "PREDIV" correctly. Simply setting the PLLSRC bits to "01" (HSI) in CFGR is not enough, because on these parts the pre-divider (1..16) is controlled in a separate register (RCC_CFGR2). If you do not set PREDIV to 1 there, the clock tree can become invalid and your code will lock up waiting for the PLL to become stable.
When you're using HashiCorp Vault's KV version 2 secrets engine, fetching a specific key from within a path like /mysecrets is not done by appending the key name to the path. The entire secret (i.e., all key-value pairs under that path) is fetched at once using the API:
GET /v1/kv/data/mysecrets
This returns a structure like:
{
  "data": {
    "data": {
      "key1": "value1",
      "key2": "value2"
    },
    "metadata": {
      ...
    }
  }
}
So if you want just key1, you need to fetch the whole secret and extract key1 from the data.data object programmatically.
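For instance, a small TypeScript sketch (the response shape follows the structure above; the interface and helper names here are illustrative, and the fetch/auth details will vary with your Vault setup):

```typescript
// Simplified shape of a KV v2 read response (see the structure above).
interface KvV2Response {
  data: {
    data: Record<string, string>;
    metadata?: unknown;
  };
}

// Pull a single key out of the full secret payload; throws if the
// key is absent so missing secrets fail loudly.
function extractKey(response: KvV2Response, key: string): string {
  const value = response.data.data[key];
  if (value === undefined) {
    throw new Error(`Key "${key}" not found in secret`);
  }
  return value;
}
```

You would obtain the response with GET /v1/kv/data/mysecrets (sending your X-Vault-Token header), parse the JSON body, and then call extractKey(body, "key1").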
Why doesn't the below work?
GET /v1/kv/data/mysecrets/key1
That path would be valid only if you stored the secret directly at /mysecrets/key1
as below:
vault kv put kv/mysecrets/key1 value=somevalue
Then you could do
GET /v1/kv/data/mysecrets/key1
and receive
{
  "data": {
    "data": {
      "value": "somevalue"
    },
    "metadata": {
      ...
    }
  }
}
There is an open source extension for that here:
https://marketplace.visualstudio.com/items?itemName=Magenic.ado-source-cat
"Is there a problem with using int64_t instead of uint64_t here?"
If all the bit-shifted values stay below 2^63, there is generally no problem with using a signed 64-bit integer. For your usage (shifting up to 1LL << 52), you're well within the range of int64_t, so you shouldn't encounter overflows or negative values.
If you conceptually treat these bit patterns as pure bitmasks, some developers prefer using uint64_t to make the signed/unsigned intent explicit.
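The range claim is easy to sanity-check: 2^52 is far below 2^63, the first value that no longer fits in int64_t. A quick arithmetic illustration (sketched here in TypeScript; powers of two in this range are exactly representable as doubles, so no BigInt is needed):

```typescript
// Largest shifted value in question (1LL << 52).
const mask = Math.pow(2, 52);

// First value that no longer fits in a signed 64-bit integer.
const int64Limit = Math.pow(2, 63);

console.log(mask);              // 4503599627370496
console.log(mask < int64Limit); // true: no overflow, no negative values
```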
"Is there a more idiomatic or correct way to write code like this?"
As is, the code works.
If you are using NestJS, you have to define your entities in your module as follows:
@Module({
imports: [
TypeOrmModule.forFeature([
User
]),
],
providers: [UserService, UserRepository],
exports: [UserService],
})
You may be able to "swallow" it with the pyrevit Failure swallower (link)
Be sure that the form class is the first class declaration in the file
(otherwise the code editor is opened by default).
public partial class [some_form_name] : Form
This issue occurs because the native .so libraries required by TUICallKit and TRTC SDK are not being included in the APK during the build process. To resolve this:
a.Go to Build > Clean Project in Android Studio.
b.Then, go to Build > Rebuild Project.
c.Uninstall the existing app from your device or emulator.
d.Reinstall the app after rebuilding.
This ensures that the native libraries are correctly packaged into the APK.
If anyone else is having issues with animating flex, try animating maxHeight instead.
The problem is that the component name starts with a lower case letter.
wrong:
export default function profile() {
return (
<View>
<Text>Hello</Text>
</View>
)
}
correct:
export default function Profile() {
return (
<View>
<Text>Hello</Text>
</View>
)
}
Restarting my Mac fixed this. Also see discussion here, people suggest brew upgrade
also.
Hope this helps :)
I got the problem solved. It was most likely caused by an old version of Compose (1.7.5).
I noticed because I set up a new project to test the fullscreen notification feature without the rest of the application. Everything worked fine there, but when I copied the files into the main project and called the test classes it did not work. I upgraded all libraries used in the project, and now everything works fine.
Try:
SELECT date_trunc('year', current_date) -- Local TZ
SELECT date_trunc('year', today()) -- UTC
today() → returns the current date in UTC
current_date → returns the current date in the local time zone
Avoid store duplication (don't create the store twice, e.g., setupStore() and store in the same app).
Wrap your App in both required provider components.
Lastly, try switching your imports from react-router to react-router-dom.
This worked for me (W11): at the bottom left corner of Visual Studio Code, it showed that the editor was in Restricted Mode. To fix the issue, simply choose to trust the workspace. Once you do that, the problem will be resolved.
It works fine now without my having done anything. I guess something might have been cached inside VS or on my computer. I tried to pull the repo after having had VS open for 5 days, and it actually works now.
Actually it's not open; your CSS style just makes it look open. Because you put them under the same parent element, and the parent element has the display:flex style, the child elements keep their height the same as the parent's, which makes it look open. You can remove className="flex" to see the difference.
There is a feature request in the Dart repository for this.
According to the issue comment you can use the third party linter "DCM" for this: https://dcm.dev/docs/rules/common/avoid-unused-parameters/
Expose the remote module as a global variable
// Inside the external Angular app
import { DynamicModule } from './dynamic.module';
(window as any).externalModules = { dynamic: DynamicModule };
You need to enable the preview mode:
Open the command palette (Ctrl+Shift+P or Cmd+Shift+P on Mac)
Type: Preferences: Open Settings (UI) and then hit Enter
In the settings search bar, type: workbench.editor.enablePreview
Make sure the "Workbench › Editor: Enable Preview" is checked.
Enable the preview from Quick Open (it is optional):
Still in settings, check if "Workbench › Editor: Enable Preview From Quick Open" is also enabled.
This will make sure the single-clicks from things like Go to Symbol or Quick Open also use preview.
It turns out that there exists a Linux subsystem called CUSE (character device in userspace), largely inspired by / part of FUSE (filesystem in userspace). It seems to support what I need in terms of supported system calls, namely `open()`, `ioctl()` and others.
https://github.com/torvalds/linux/blob/master/fs/fuse/cuse.c
It is worth checking out. Using it will likely require some test setup actions to be done with elevated privileges, but that is part of the game when dealing with drivers, I guess.
Best way I found so far is date_trunc
function:
date_trunc('year', today())
DateAdd expects a number (here: a number of minutes), but you are passing a date/time value. 00:10 is equal to 0.00694444444444444, so your query adds only a fraction of a minute to Endzeit. Try instead:
SET Timeslots.Endzeit = [Timeslots]![Startzeit]+[Prüfungen]![Dauer];
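The fraction comes from Access storing date/time values as fractional days, so a time value of 00:10 is 10/1440 of a day. A quick arithmetic check (sketched in TypeScript):

```typescript
// Access stores date/time values as fractional days:
// 10 minutes = 10 / (24 * 60) of a day.
const tenMinutesAsDays = 10 / (24 * 60);

console.log(tenMinutesAsDays); // ≈ 0.006944444444444444
```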
Run
python -m pip install pyodbc
It worked on Windows.
In iCal, use status: 'NEEDS-ACTION'; that puts partstat: 'NEEDS-ACTION' in the raw email.
In your scss file:
ion-item {
--color: white; // Your text color
}
that should work.
We first need to know where the cancellation is being thrown from. Try connecting to the backend without the proxy; if that succeeds, then it's certainly not a problem with gRPC.
Also, can you put some logs around the RPC error message, along with the status you are seeing in the client?
Did you ever manage to get this to work? While mine seems OK, the output of the tests is not being generated into the CSV file. I have a load of tests created, but no way to actually view the outcome.
Thanks
Matt
Hi, I also have this problem on my phone (A32). These ports are open:
12388
53460
45810
65535
Here is sample code that implements picture-in-picture (PiP) with the Agora RTC SDK.
An earlier transaction can cause transactions after it to be re-executed if and only if those later transactions read from the same memory that the earlier transaction writes to.
You have quoted the reason already: we need re-validation to check that the later transactions still read the same view.
I got this to do what I wanted. It does in fact give me the redirect link in a timely manner (just wait a second or so; don't click it instantly), but it adds a duplicate "Prefilled Exercise Log Link" column to my spreadsheet every time. [Google Sheets image that illustrates the column] https://i.sstatic.net/83rmNtTK.png
The code was giving an error when I tried to post it, so here's an image.
I am also trying to find this kind of API, but I am not finding it; that's why I started using Selenium web scraping to get the result. But now Google has detected the bot and I am unable to get the result. Could you please share the approach you are using?
If I'm not wrong, there isn't any public API to check and access this. Maybe you can check for no cellular and no Wi-Fi plus Emergency SOS mode, since satellite mode is connected only in that situation.
Make Sure You Do This :
Add all the custom font names to info.plist with row : Fonts provided by application
Make sure there are no spaces in the font file name, for example: Ubuntu-Bold.ttf, not Ubuntu Bold.ttf
Make sure you add the extension .ttf or .otf
Project Name > Build Phases > Copy Bundle Resources ( Add here too )
This will remove the warnings in Xcode 16.xx
Thanks everyone.
I thought about the following design:
Inside the thread itself, I call m_promise.set_value()
after the task finishes:
m_thread = std::thread([this]()
{
    task();
    m_promise.set_value();
});
Then, when I want to stop the thread, I do:
if(m_thread.joinable()) {
if (m_future.valid())
m_future.wait(); // Wait until the thread signals it has finished
m_thread.join();
}
This way, I guarantee that by the time I call join()
, the thread has already completed its task.
Would there be any issue with this implementation?
Also, if I want to make it even safer, I thought about adding:
if(!m_alreadyJoining.exchange(true))
m_thread.join();
to protect against joining twice in case of concurrent interrupts.
Any feedback? Or what could be an issue here?
Running
sudo apt update
and then
sudo apt install --reinstall ros-noetic-catkin
solved it for me.
It got the /opt/ros ... directory back, and catkin_make started working again.
It seems like Kotlin was not installed correctly; have you tried looking at tutorials?
https://youtu.be/Fu00X0RZwyY?si=vSZbgES2siV5J8Xi
Try uninstalling Kotlin and going through the video. Using Android Studio can make it more intuitive.
Imagine a tool where you can model your own domain—Tasks, Variables, Sequences—and attach rules like 'A task must not start before its dependencies are completed' using a simple expression language. That’s what MDriven allows. It’s like building your own rules engine without needing to write all the backend code. you can check them out at https://mdriven.net/ , i was impressed by how fast i was able to complete my tasks
Add the following to /home/cwhisperer/AndroidProjects/vosim/android/app/build.gradle.kts:
android {
ndkVersion = "27.0.12077973"
...
}
But in my build.gradle.kts I have
android {
namespace = "lu.vo.bla"
compileSdk = flutter.compileSdkVersion
ndkVersion = flutter.ndkVersion
do I have to replace the variable?
First check your app's Flutter version; if it is higher than 3.24.5, downgrade to 3.24.5.
By wrapping the BrowserRouter
import in curly brackets the error went away:
import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
import 'bootstrap/dist/css/bootstrap.min.css';
import {BrowserRouter} from 'react-router-dom';
const root = ReactDOM.createRoot(document.getElementById('root') as HTMLElement);
root.render(
<BrowserRouter>
<App />
</BrowserRouter>
);
For me the easiest way was to just use a normal input field and then use
z.string().regex(/^[1-9]\d*$/)
With this you can safely convert the string value into a number later (in your onSubmit method, for example). If you need float values you would need to change the regex. Not sure if this is the right way to do it, but it works really well.
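To illustrate the validate-then-convert step (a sketch without zod; the regex is the same one the schema above uses, and toPositiveInt is a hypothetical helper name):

```typescript
// Same pattern as in the zod schema: positive integers, no leading zeros.
const positiveInt = /^[1-9]\d*$/;

// Validate first, then convert, e.g. inside an onSubmit handler.
function toPositiveInt(raw: string): number | null {
  return positiveInt.test(raw) ? Number(raw) : null;
}

console.log(toPositiveInt("42"));  // 42
console.log(toPositiveInt("007")); // null (leading zero rejected)
console.log(toPositiveInt("-3"));  // null
```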
I suggest using TasksBoard (https://tasksboard.com). It does have premium features such as board customization, but the free version is enough for me because it supports board, vertical, and horizontal views as well as dark mode, with general add/delete of tasks (in sync with Google Tasks). Most importantly, it has a built-in Chrome application, plus a standalone-like web app that can be pinned to the taskbar.
I've been using it seamlessly since January.
The actual issue is with Linux socket management.
Simply put the phone in flight mode and back to retrigger DHCP IP assignment, lease renewal, etc.
This will reset the cache.
It works after this; no need to do anything else.
I had the same problem. In my case I just reinstalled the Visual C++ Redistributable package (the vc_redist.exe file) from Visual Studio and it worked.
Adding line breaks isn't something that the Stripe Checkout page supports right now. The same question was also asked here.
No, there's no option to customize the button text either.
At first, I suspected it might be related to an access token issue, so I updated the configuration to use a global token. However, that did not resolve the problem. Can anyone help me with this or suggest a workaround?
In 0.78, the --client-logs flag was added based on user feedback.
npx react-native start --client-logs
I have downloaded the .exe, but when I extract it I don't see NQjc.jar in the NetSuite JDBC Drivers folder; there are other .exe, .dll, .cer, and .txt files. Please help me with the proper installation to get NQjc.jar.
The Jenkins job that is scheduled to trigger every 15 minutes is consistently failing after midnight. During the day, when we're logged into Jenkins, the same job runs without any issues.
The failure appears to be caused by a script attempting to parse a JSON response from an API, which returns empty after midnight. As a result, the script throws a JSON decoding error because it expects a valid JSON payload.
This behavior suggests that the response from the API is either empty or malformed during that time window, likely due to the absence of a recent successful build reference.
I was experiencing this. The problem for me was that I had a container with white-space: pre-wrap. Removing it solved the spacing issue.
I found out why: there is an attribute called count in my model. Maybe the query resolved that count attribute rather than COUNT in SQL.
This was a silly mistake; thanks everyone.
I reckon it's better to use a pure bash script together with a hotkey that the script is bound to. It's better than learning another language.
Check whether domutils is installed. If the issue persists, close VS Code and restart it; sometimes this is an ESLint bug, and reopening VS Code fixes it.
If your *.csproj file is not in the folder where you're running the dotnet build command, you're likely to face this error. You can navigate to your project folder using the cd command. For example, assuming you're in the root directory where your solution file is located, run the following commands in order:
cd CSharpBiggener.Game
dotnet build
Why can't I login to Azure Database for Postgres, when I'm member of admin group
Important considerations when connecting as a Microsoft Entra group member:
Use the exact name of the Entra group you're trying to connect with — spelling and capitalization matter.
Don’t use a group member’s name or alias, just the group name itself.
If the group name has spaces, put a backslash (\) before each space to make it work.
The sign-in token you use is only good for 5 to 60 minutes, so it's best to grab a fresh one right before you log in to the PostgreSQL database. Refer to the link below: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication#use-a-token-as-a-password-for-signing-in-with-psql-or-pgadmin
I may have found a solution to the problem you're describing (without having to downgrade the httpx package):
https://github.com/Shakabrahh/youtube-search-python/commit/842d6e37f479c9c49234511f7980a69f4f2bbd3f
Please keep in mind that this solution is not fully tested and some other errors may occur.
You can override your default JDK from File > New Projects Setup > Settings for New Projects > Build, Execution, Deployment > Build Tools > Gradle and change the default Gradle JDK.
FYI: https://issuetracker.google.com/issues/214428183#comment10
You can run
conda config --remove channels intel
Also, as an additional note: the intel channel is based on conda-forge deps, so it works to use conda-forge for resolving dependencies instead of anaconda.
I made simple adjustments: the input can be a negative value, and the user has to enter the value as an increment or decrement accordingly. Not the best or most scalable approach, but it does the work. Can you guys please suggest a better one?
php:
echo "success"; // if success
javascript:
if (data == "success") {
    window.location.href = "target.html";
} else {
    $('#form-messages').html(data);
}
🔍 Problems in your code: incorrect prime check logic.
You're looping for (i = 2; i <= a; i++) and, inside that, you're modifying a (a++). That's the core issue: modifying the loop condition variable inside the loop can cause an infinite loop.
if (a % i != 0) is not sufficient to check primality; a number is prime only if it's not divisible by any number from 2 to √n.
You're also logging "prime" or "not prime" on every iteration, which isn't correct behavior.
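For reference, a corrected sketch (variable names are illustrative): check divisors only up to √n, never touch the loop variable, and print the verdict once, after the loop:

```typescript
// True when n is prime; trial division up to sqrt(n).
function isPrime(n: number): boolean {
  if (n < 2) return false;
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false; // found a divisor: not prime
  }
  return true; // no divisor found
}

const a = 29;
console.log(isPrime(a) ? "prime" : "not prime"); // prime
```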
I don't want to change my test case names, so I added a specific tag:
[Tags]    Try Run
and ran robot --include "Try Run".
You can consider using the undocumented method on the Prisma client called _createPrismaPromise, which takes one argument (a function to wrap) and returns a PrismaPromise, which can then be used in Prisma transactions.
I added these lines of code in my handler.js to solve the issue:
const configPath = path.join(process.cwd(),'standalone/.next/required-server-files.json');
const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
const nextConfig = JSON.stringify(config?.config);
process.env.__NEXT_PRIVATE_STANDALONE_CONFIG = nextConfig;
There are multiple issues with the sql snippet you provided (which I ASSUME is going to be used in javascript during a mysql call?), given your desired result. As I understand it you would like to see a count of the rows that have a specific value for the 'code' column. Assuming you do not need the id returned as well, and only need the count:
SELECT
COUNT(DISTINCT CASE WHEN Code = 'S03' THEN ID END) AS CodeFound,
COUNT(DISTINCT CASE WHEN Code != 'S03' THEN ID END) AS NoCode
FROM A
That should be enough to return it to you. If you needed to check multiple codes at the same time then it would be a good idea to use 'in' or 'not in', but I would add that you will need parentheses around the value(s) such as
not in ('S03')
Shadyar's answer works, but just be really careful not to mess up /etc/passwd (like I did), or it's very hard to fix.
Another (safer?) way is to just append cd /your/target/starting/directory/ to the ~/.bashrc file, as suggested by Aigbe above.
@paulo, can you give me examples? I face the same issue with deciding when to delete images: sometimes, with retention rules, an image has already been deleted but is still needed in Kubernetes when rolling the app back to an earlier version. Basically, I only want to delete images that are not in the list of images Kubernetes is using: get all images in use in k8s, and delete every image that is not in that list.
I am looking for a solution to a similar problem... I have an Excel sheet with 100 rows, each containing a unique word, and a PDF file which contains thousands of sentences including those words. Is there any way to just upload the Excel file and the PDF, have the reader take one word at a time and search for it through the PDF, and, once all the words have been searched, return a PDF with all the words I am looking for highlighted?
You are using SCRAM-SHA-512, right?
SCRAM doesn't require serviceName="kafka"; in the JAAS file.
You can also use below config in server.properties instead of separate jaas file.
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafkaadmin" password="kafkaadmin123456";
Ref : https://docs.confluent.io/platform/7.0/kafka/authentication_sasl/authentication_sasl_scram.html
I'd suggest using ComponentPropsWithRef or ComponentPropsWithoutRef as is, or extending them with other props if you need to.
import { ComponentPropsWithRef } from 'react'
...
ComponentPropsWithRef<ElementType>
This happens to me often when there are multiple signals with errors (gray or yellow wires); check for nodes that require OR gates, or default terminals that are badly named.
Yes, this can be done with SVG. Will provide more details soon.
You can refer to this link: https://support.huaweicloud.com/intl/zh-cn/basics-terraform/terraform_0021.html. It teaches you how to configure the backend; you can use Google Translate to read it in English.
Without seeing the rest of the code, I think the most likely problem is doing all of this before you have the screen initialized. After all, if the screen has not been created yet, where would you expect the input to appear? Here is how I would change it:
import turtle as trtl
troll = trtl.Turtle()
# Create the screen first
wn = trtl.Screen()
clr = input("give a color pls: ")
# properly use bgcolor as a function from turtle, not a standalone function
wn.bgcolor(clr)
wn.mainloop()
The document https://www.w3.org/TR/fetch-metadata/ says:
To set the Sec-Fetch-Dest header for a request r:
Assert: r’s url is a potentially trustworthy URL.
And "potentially trustworthy URL" is defined here:
https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy
Items 3 and 4 in this section of the document say:
3. If origin’s scheme is either "https" or "wss", return "Potentially Trustworthy". ...
4. If origin’s host matches one of the CIDR notations 127.0.0.0/8 or ::1/128 [RFC4632], return "Potentially Trustworthy".
So, yes, apparently these headers are only sent if you are running from HTTPS, from localhost, or another special URL, but not from plain http.
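A rough TypeScript sketch of the scheme and loopback checks from items 3 and 4 (simplified; the spec's full algorithm has more steps, and treating plain "localhost" as trustworthy is a common browser extension of the rule):

```typescript
// Simplified "potentially trustworthy origin" check covering the two
// quoted items: secure schemes (https/wss) and loopback addresses.
function isPotentiallyTrustworthy(url: string): boolean {
  const { protocol, hostname } = new URL(url);
  if (protocol === "https:" || protocol === "wss:") return true; // item 3
  // item 4: 127.0.0.0/8 or ::1
  if (/^127\.\d{1,3}\.\d{1,3}\.\d{1,3}$/.test(hostname)) return true;
  if (hostname === "[::1]" || hostname === "::1") return true;
  // "localhost" is also treated as trustworthy by browsers in practice
  if (hostname === "localhost") return true;
  return false;
}

console.log(isPotentiallyTrustworthy("https://example.com"));   // true
console.log(isPotentiallyTrustworthy("http://127.0.0.1:8080")); // true
console.log(isPotentiallyTrustworthy("http://example.com"));    // false
```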
You're absolutely right to be cautious when things seem too easy. But here are some reasons why options 2 and 3 are bad:
Option 1 (a real server like Apache/FastCGI):
Option 2 (run the built-in PHP server: php -S localhost:8000):
"Easier" – true. But there are gotchas.
Not production-ready: PHP’s built-in server is single-threaded and not intended for production use.
No HTTPS support, no access logging, no protection layers like mod_security.
It may hang or drop requests under load (even low load with SSR + multiple API hits).
Option 3 (use a CLI command from JS on the server side, like php index.php --someController='MyController'):
"Must be faster?" Maybe. But... here's the devil in the details.
You lose all the benefits of an HTTP server: no persistent processes, no connection pooling, no caching.
Harder to scale later (parallel CLI processes can spike memory/CPU).
Error handling is painful – think stderr, exit codes, etc.
That is my opinion about it; please share your thoughts.
If you want to make the grid items stretch to fit the screen you might want to use flexbox instead with grow and wrap. Grids make evenly sized spaces to put your items in so your content won't fill the screen if it only fills one grid space. Maybe look into col-span if you want to stretch to fit the screen while still using grid.
You will need a converter to JWT; please check https://medium.com/@wirelesser/oauth2-write-a-resource-server-with-keycloak-and-spring-security-c447bbca363c
This worked for me:
PS C:\WINDOWS\system32> Set-Location "set_the_path"
You can use the @dynamic decorator. https://www.union.ai/docs/flyte/user-guide/core-concepts/workflows/dynamic-workflows/
Finally I found a way to make it happen, so I'm posting my solution here in case someone is facing the same problem.
Because we need to send some special headers to the Azure service when creating the WebSocket connection, we need a proxy server (the native WebSocket in the browser cannot send custom headers).
server.ts
:
import http from "http";
import * as WebSocket from "ws";
import crypto from "crypto";
import fs from "fs";
import path from "path";
// Azure tts
const URL =
"wss://<your_azure_service_origin>.tts.speech.microsoft.com/cognitiveservices/websocket/v2";
const KEY = "your_azure_service_key";
const server = http.createServer((req, res) => {
res.end("Server is Running");
});
server.on("upgrade", (req, socket, head) => {
const remote = new WebSocket.WebSocket(URL, {
headers: {
"ocp-apim-subscription-key": KEY,
"x-connectionid": crypto.randomUUID().replace(/-/g, ""),
},
});
remote.on("open", () => {
console.log("remote open");
const requestId = crypto.randomUUID().replace(/-/g, "");
const now = new Date().toISOString();
// send speech.config
remote.send(
[
`X-Timestamp:${now}`,
"Path:speech.config",
"",
`${JSON.stringify({})}`,
].join("\r\n"),
);
// send synthesis.context
remote.send(
[
`X-Timestamp:${now}`,
"Path:synthesis.context",
`X-RequestId:${requestId}`,
"",
`${JSON.stringify({
synthesis: {
audio: {
// outputFormat: "audio-16khz-32kbitrate-mono-mp3",
outputFormat: "raw-16khz-16bit-mono-pcm",
metadataOptions: {
visemeEnabled: false,
bookmarkEnabled: false,
wordBoundaryEnabled: false,
punctuationBoundaryEnabled: false,
sentenceBoundaryEnabled: false,
sessionEndEnabled: true,
},
},
language: { autoDetection: false },
input: {
bidirectionalStreamingMode: true,
voiceName: "zh-CN-YunxiNeural",
language: "",
},
},
})}`,
].join("\r\n"),
);
const client = new WebSocket.WebSocketServer({ noServer: true });
client.handleUpgrade(req, socket, head, (clientWs) => {
clientWs.on("message", (data: Buffer) => {
const json = JSON.parse(data.toString("utf8")) as {
type: "data" | "end";
data?: string;
};
console.log("Client:", json);
remote.send(
[
`X-Timestamp:${new Date().toISOString()}`,
`Path:text.${json.type === "data" ? "piece" : "end"}`,
"Content-Type:text/plain",
`X-RequestId:${requestId}`,
"", // empty line
json.data || "",
].join("\r\n"),
);
});
const file = createWAVFile(`speech/${Date.now()}.wav`);
remote.on("message", (data: Buffer, isBinary) => {
// console.log("Remote, isBinary:", isBinary);
const { headers, content } = parseChunk(data);
console.log({ headers });
if (isBinary) {
if (headers.Path === "audio") {
// why we need to skip the first byte
const audioContent = content.subarray(1);
if (audioContent.length) {
file.write(audioContent);
clientWs.send(audioContent);
}
}
} else if (headers.Path === "turn.end") {
file.end();
}
});
clientWs.on("close", () => {
console.log("client close");
remote.close();
});
clientWs.on("error", (error) => {
console.log("client error", error);
});
});
remote.on("close", (code, reason) => {
console.log("remote close", reason.toString());
});
remote.on("error", (error) => {
console.log("remote error", error);
});
});
});
function parseChunk(buffer: Buffer) {
const len = buffer.length;
const headers: string[][] = [];
// skip first bytes
//? what do the first bytes mean?
let i = 2;
let temp: number[] = [];
let curr: string[] = [];
let contentPosition: number;
for (; i < len; i++) {
if (buffer[i] === 0x3a) {
// :
curr.push(Buffer.from(temp).toString());
temp = [];
} else if (buffer[i] === 0x0d && buffer[i + 1] === 0x0a) {
// \r\n
// maybe empty line
if (temp.length) {
curr.push(Buffer.from(temp).toString());
temp = [];
headers.push(curr);
curr = [];
}
i += 1; // skip \n
contentPosition = i;
if (headers.at(-1)?.[0] === "Path") {
// if we get `Path`
break;
}
} else {
temp.push(buffer[i]);
}
}
const obj: Record<string, string> = {};
for (const [key, value] of headers) {
obj[key] = value;
}
const content = buffer.subarray(contentPosition!);
return { headers: obj, content };
}
// for test
function createWAVFile(
filename: string,
sampleRate = 16000,
bitDepth = 16,
channels = 1,
) {
let dataLength = 0;
let data = Buffer.alloc(0);
return {
write(chunk: Buffer) {
dataLength += chunk.length;
data = Buffer.concat([data, chunk]);
},
end() {
const byteRate = sampleRate * (bitDepth / 8) * channels;
const blockAlign = (bitDepth / 8) * channels;
// WAV head
const buffer = Buffer.alloc(44);
buffer.write("RIFF", 0); // ChunkID
buffer.writeUInt32LE(36 + dataLength, 4); // ChunkSize
buffer.write("WAVE", 8); // Format
buffer.write("fmt ", 12); // Subchunk1ID
buffer.writeUInt32LE(16, 16); // Subchunk1Size (16 for PCM)
buffer.writeUInt16LE(1, 20); // AudioFormat (1 = PCM)
buffer.writeUInt16LE(channels, 22); // Channels
buffer.writeUInt32LE(sampleRate, 24); // SampleRate
buffer.writeUInt32LE(byteRate, 28); // ByteRate
buffer.writeUInt16LE(blockAlign, 32); // BlockAlign
buffer.writeUInt16LE(bitDepth, 34); // BitsPerSample
buffer.write("data", 36); // Subchunk2ID
buffer.writeUInt32LE(dataLength, 40); // Subchunk2Size
const stream = fs.createWriteStream(filename);
stream.write(buffer);
stream.write(data);
stream.end();
console.log(`write to file ${filename}`);
},
};
}
server.listen(8080);
player.ts:
type StreamingAudioPlayerOptions = {
autoPlay: boolean;
};
export class StreamingAudioPlayer {
private context = new AudioContext();
private chunks: Blob[] = [];
private decodeChunkIndex = 0;
private buffers: AudioBuffer[] = [];
private duration = 0;
private decoding = false;
private scheduleIndex = 0;
private currentDuration = 0; // rough record of played duration, for display only; not reliable for playback control
private state: "play" | "stop" = "stop";
private isPlaying = false; // whether audio is actually playing
// scheduled start time of the next buffer
private nextScheduledTime = 0;
// audio sources that have been created and are still active
private activeSources: AudioBufferSourceNode[] = [];
private sourceSchedule = new WeakMap<AudioBufferSourceNode, [number]>();
private beginOffset = 0;
private timer: number | null = null;
constructor(private readonly options: StreamingAudioPlayerOptions) {}
private async decodeAudioChunks() {
if (this.decoding || this.chunks.length === 0) {
return;
}
this.decoding = true;
while (this.decodeChunkIndex < this.chunks.length) {
const originBuffer =
await this.chunks[this.decodeChunkIndex].arrayBuffer();
// Step 1: reinterpret the raw PCM bytes as Int16
const int16 = new Int16Array(originBuffer);
// Step 2: convert to Float32
const float32 = new Float32Array(int16.length);
for (let i = 0; i < int16.length; i++) {
float32[i] = int16[i] / 32768; // Normalize to [-1.0, 1.0]
}
// Step 3: create a mono AudioBuffer
const audioBuffer = this.context.createBuffer(
1, // mono
float32.length,
16000, // sampleRate
);
audioBuffer.copyToChannel(float32, 0);
this.buffers.push(audioBuffer);
this.duration += audioBuffer.duration;
console.log(
`chunk ${this.decodeChunkIndex} decoded, total buffer duration: ${this.duration}`,
);
this.decodeChunkIndex++;
if (this.state === "play" && !this.isPlaying) {
console.log("ready to play");
this._play();
} else if (this.state === "stop" && this.options.autoPlay) {
this.play();
}
}
this.decoding = false;
}
async append(chunk: Blob) {
this.chunks.push(chunk);
if (!this.decoding) {
this.decodeAudioChunks();
}
}
private scheduleBuffers() {
while (this.scheduleIndex < this.buffers.length) {
if (this.nextScheduledTime - this.context.currentTime > 10) {
// keep roughly 10 s of audio scheduled ahead
break;
}
const buffer = this.buffers[this.scheduleIndex];
const source = this.context.createBufferSource();
source.buffer = buffer;
// record and advance the scheduled start time
const startTime = this.nextScheduledTime;
this.nextScheduledTime += buffer.duration;
source.connect(this.context.destination);
if (this.beginOffset !== 0) {
source.start(startTime, this.beginOffset);
this.beginOffset = 0;
} else {
source.start(startTime);
}
this.sourceSchedule.set(source, [startTime]);
console.log(`schedule chunk ${this.scheduleIndex}`);
this.activeSources.push(source);
const index = this.scheduleIndex;
this.scheduleIndex++;
// listen for "ended" to maintain playback state
source.addEventListener("ended", () => {
// remove the finished source
this.activeSources = this.activeSources.filter((s) => s !== source);
if (this.state !== "play") {
return;
}
console.log(`chunk ${index} play finish`);
if (this.scheduleIndex < this.buffers.length) {
// keep scheduling the chunks that have not been played yet
this.scheduleBuffers();
} else if (this.activeSources.length === 0) {
// if no sources are left playing, stop playback
this._stop();
}
});
}
}
private _play() {
// use requestAnimationFrame to roughly track the played duration
// ?what happens if playback stalls
const updatePlayDuration = (timestamp1: number) => {
return (timestamp2: number) => {
this.currentDuration += timestamp2 - timestamp1;
this.timer = requestAnimationFrame(updatePlayDuration(timestamp2));
};
};
this.timer = requestAnimationFrame(updatePlayDuration(performance.now()));
// initialize the schedule to the current context time
this.nextScheduledTime = this.context.currentTime;
this.isPlaying = true;
this.scheduleBuffers();
}
private _stop() {
if (this.state !== "play") {
return;
}
// stop all active audio sources
this.activeSources.forEach((source, index) => {
if (index === 0) {
// current playing source
const offset =
this.context.currentTime - this.sourceSchedule.get(source)![0];
console.log("offset:", offset);
}
source.stop();
});
cancelAnimationFrame(this.timer!);
this.timer = null;
this.activeSources = [];
// note: not all audio chunks may have been loaded yet
this.state = "stop";
this.isPlaying = false;
console.log(`played duration: ${this.currentDuration}`);
}
resume() {
// resuming should be based on the played duration,
// since the played duration can be adjusted via a timeline/seek bar (not implemented yet)
this.scheduleIndex = 0;
let d = 0;
for (; this.scheduleIndex < this.buffers.length; this.scheduleIndex++) {
const buffer = this.buffers[this.scheduleIndex];
if (d + buffer.duration * 1000 > this.currentDuration) {
break;
}
d += buffer.duration * 1000;
}
this.state = "play";
this.beginOffset = (this.currentDuration - d) / 1000;
console.log("resume offset", this.beginOffset);
this._play();
}
play() {
if (this.state === "play") {
return;
}
this.state = "play";
this.duration = this.buffers.reduce((total, buffer) => {
return total + buffer.duration;
}, 0);
if (this.duration === 0) {
console.warn("waiting buffer");
return;
}
this.currentDuration = 0;
this.scheduleIndex = 0;
console.log(this);
this._play();
}
pause() {
this._stop();
}
}
index.js:
// something like:
const player = new StreamingAudioPlayer({ autoPlay: true });
const ws = new WebSocket("xxx");
ws.addEventListener("open", () => {
ws.send('{"type":"data","data":"hello"}');
ws.send('{"type":"data","data":" world"}');
ws.send('{"type":"end"}');
});
ws.addEventListener("message", (e) => {
player.append(e.data as Blob);
});
The code is for reference only. If anyone has any better suggestions, please feel free to share your thoughts.
This is the workaround that worked for me! https://docs.hetrixtools.com/microsoft-teams-how-to-remove-name-used-a-workflow-template-to-send-this-card-get-template/
There is one command that returns exit code 5. For `git config --unset`, exit code 5 means you tried to unset an option which does not exist.
Example:
- name: Reset git store credentials
  run: |
    git config --global --unset credential.helper # first run may succeed
    git config --global --unset credential.helper # second run fails with `Process completed with exit code 5.`
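One way to keep the step from failing is to tolerate the non-zero exit. A sketch (adjust to your own workflow step):

```shell
# Reset git stored credentials. `|| true` swallows git's exit code 5,
# which `git config --unset` returns when the option does not exist,
# so the workflow step still passes.
git config --global --unset credential.helper || true  # may succeed (key existed)
git config --global --unset credential.helper || true  # key already gone: exit 5 is ignored
```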
You are pretty close. The code is not working because you compare the user input for loginuser and loginpass against every whole line of the txt file. The way you structured your if statement also will not work properly, as it only checks loginpass, not loginuser.
The way I would recommend correcting your code is shown below:
# create an array of strings from the lines in the txt file
database = file.readlines()
# check each string in the array
for line in database:
    # check if both user inputs are on a single line
    if loginuser in line and loginpass in line:
        print("Login successful")
        # this ends the larger while loop, allowing the user to use other methods
        y += 1
        # break out of the for loop
        break
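One caveat with substring checks like `loginuser in line`: the input "ali" would match the stored user "alice". If each line stores the credentials as "username,password" (the separator is an assumption; adjust to your file's actual format), comparing the exact fields is safer. A hypothetical helper:

```python
# Hypothetical stricter check: split each line into fields and compare
# exactly, instead of substring matching against the whole line.
def check_login(lines, loginuser, loginpass):
    for line in lines:
        # strip the trailing newline, then split "username,password"
        user, _, password = line.strip().partition(",")
        if user == loginuser and password == loginpass:
            return True
    return False
```

You could then replace the for loop with `if check_login(file.readlines(), loginuser, loginpass):`.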