I got the same error in my VS Code. This happens when you have initialized .git in a higher-level folder.
Ways to solve it:
Through File Explorer (Windows/macOS)
Locate the folder that has the 10k-commit issue in your file explorer.
Keep looking for a .git folder in the parent folders.
If found, delete the .git folder.
Through the VS Code Terminal (Hard Way)
Open the terminal in VS Code using Ctrl+`.
First navigate to the folder that has the 10k-commits issue, e.g.: cd "C:/Windows/Deepanshu/Projects/IssueFolder"
Once there, use cd .. to go to the parent folder, then ls to list the files and folders in the current folder.
Once the .git folder is found, delete it with: rm -rf .git (plain rm fails because .git is a directory).
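The terminal route can be rehearsed in a throwaway sandbox first (the paths below are invented for the demo; the stray .git is created only so there is something to remove):

```shell
# Build a sandbox that mimics the problem: a stray .git in a parent folder
mkdir -p /tmp/git-demo/parent/.git /tmp/git-demo/parent/IssueFolder
cd /tmp/git-demo/parent/IssueFolder

cd ..           # go to the parent folder
ls -a           # list files and folders; note the stray .git
rm -rf .git     # delete it (rm -rf, because .git is a directory)
ls -a           # confirm .git is gone
```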
(this is my first answer on Stack Overflow, I hope it solves the problem)
Modify your repository method as follows if you only need product names. Note that with nativeQuery = true the statement runs as plain SQL against the database, so the orders table and product column names must match your schema.
@Query(value = "SELECT o.product FROM orders o", nativeQuery = true)
List<String> findAllProduct();
Since August 2023, including a pipeline from another repository is possible, but only on the Bitbucket Premium plan.
See this Atlassian Blog article from Aug 24, 2023: [New Premium feature] Share pipeline workflow configurations across your repositories
I've written a more detailed answer on a more recent SO question about this with more context.
To check if the feature is still Premium only, visit Bitbucket's Pricing page and check which plans include the "Scale CI workflows" Core Feature.
I was able to make it work thanks to the advice of Lundin and JimmyB in the comments.
NOTE: This code works on my physical ESP32 (DevKit V1) since I got it. I have not tested it on the emulator.
What I was missing:
What I discovered:
I post here a working code in case someone gets stuck like me.
#include <stdio.h>
#include <unistd.h>
#include <inttypes.h>
#define GPIO_OUT_W1TS_REG (*(volatile uint32_t*)0x3FF44008)
#define GPIO_OUT_W1TC_REG (*(volatile uint32_t*)0x3FF4400C)
#define GPIO_ENABLE_W1TS_REG (*(volatile uint32_t*)0x3FF44024)
#define GPIO_ENABLE_W1TC_REG (*(volatile uint32_t*)0x3FF44028)
#define GPIO_FUNC0_OUT_SEL_CFG_REG 0x3FF44530
#define LEDC_CONF_REG (*(volatile uint32_t*)0x3FF59190)
#define LEDC_HSCH0_CONF0_REG (*(volatile uint32_t*)0x3FF59000)
#define LEDC_HSCH0_CONF1_REG (*(volatile uint32_t*)0x3FF5900C)
#define LEDC_HSCH0_DUTY_REG (*(volatile uint32_t*)0x3FF59008)
#define LEDC_HSCH0_DUTY_R_REG (*(volatile uint32_t*)0x3FF59010)
#define LEDC_HSCH0_HPOINT_REG (*(volatile uint32_t*)0x3FF59004)
#define LEDC_HSTIMER0_CONF_REG (*(volatile uint32_t*)0x3FF59140)
#define IO_MUX_GPIO26_REG (*(volatile uint32_t*)0x3FF49028)
#define DPORT_PERIP_CLK_EN_REG (*(volatile uint32_t*)0x3FF000C0)
#define DPORT_PERIP_RST_EN_REG (*(volatile uint32_t*)0x3FF000C4)
#define LEDC_HSTIMER0_VALUE_REG (*(volatile uint32_t*)0x3FF59144)
#define resolution 8u
void app_main(void)
{
printf("test\n");
DPORT_PERIP_CLK_EN_REG |= (1<<11);// enable clock for ledc
LEDC_HSTIMER0_CONF_REG &= ~(0xf);
LEDC_HSTIMER0_CONF_REG |= resolution; //resolution = 8 bit
uint divider = 80000000 / (5000 * 256);
LEDC_HSTIMER0_CONF_REG |= (divider<<13);
LEDC_HSCH0_CONF0_REG &= ~(0b11); // select timer 0 (clear the two timer-select bits; ~(0b00) cleared nothing)
LEDC_HSCH0_CONF0_REG |= (1<<2); // enable output channel
LEDC_HSCH0_HPOINT_REG = 1; // value to set high
LEDC_HSCH0_DUTY_REG &= ~(0xffffff);
LEDC_HSCH0_DUTY_REG |= (20<<4); // low duty cycle
uint low = 1; // flag to control next duty value
// gpio setting
volatile uint32_t* gpio26_cfg = (volatile uint32_t*)GPIO_FUNC0_OUT_SEL_CFG_REG + 26;
// peripheral 71 -> hschan0
*gpio26_cfg = 71;
GPIO_ENABLE_W1TS_REG = (1<<26); // write-1-to-set register: a plain write is enough
// function 2
IO_MUX_GPIO26_REG &= ~(0b111 << 12);
IO_MUX_GPIO26_REG |= (2<<12);
LEDC_HSCH0_CONF1_REG |= (1<<31); // start channel duty cycle
LEDC_HSTIMER0_CONF_REG &= ~(1<<24); //reset timer
uint counter = 0;
while (1) {
if (counter == 2){
if (low == 0) {
LEDC_HSCH0_DUTY_REG &= ~(0xffffff);
LEDC_HSCH0_DUTY_REG |= (30<<4);
low = 1;
LEDC_HSCH0_CONF1_REG |= (1<<31); // start channel duty cycle
} else {
LEDC_HSCH0_DUTY_REG &= ~(0xffffff);
LEDC_HSCH0_DUTY_REG |= (128<<4);
low = 0;
LEDC_HSCH0_CONF1_REG |= (1<<31); // start channel duty cycle
}
counter = 0;
}
printf("timer value: %" PRIu32 "\n", LEDC_HSTIMER0_VALUE_REG);
printf("duty value: %" PRIu32 "\n", LEDC_HSCH0_DUTY_R_REG);
printf("counter: %d\n", counter);
sleep(1);
counter++;
}
}
Since I am a newbie in C, please feel free to correct me as Lundin did; I will appreciate it.
I've also been trying to fetch Google reviews using the new Places API with an unrestricted API key and a properly set-up billing account, but I'm still getting REQUEST_DENIED. I've confirmed that my API key works for other requests (like address components) but fails for reviews. It seems accessing reviews might require the Enterprise SKU, but I haven't found a way to enable it. Has anyone managed to solve this?
I was able to get this to work. In applicationWillTerminate(_) you can schedule a notification as long as you do it synchronously. If your normal scheduling uses async functions, you may need to create a separate synchronous function or do it inline. Just make sure you do it as quickly as possible so the time limit doesn't run out. Also, you need to schedule it a second or two in the future: it will not fire with the current date, because that time has already passed by the time the notification gets scheduled.
Since calling ReleaseCapture() inside WM_NCLBUTTONDOWN did not work, try explicitly forwarding the message to DefWindowProc after processing WM_NCLBUTTONDOWN. This ensures the system's default behavior is not interfered with. Modify your WM_NCLBUTTONDOWN handler to explicitly release capture.
I have been coding by myself for a year and I wanted to make a similar game, but it doesn't work for me either. My code is close to identical to yours:
import turtle

# create screen
wn = turtle.Screen()
wn.bgcolor("light green")

# create player
player = turtle.Turtle()
player.color("blue")
player.shape("triangle")
player.penup()
speed = 1

# create the inputs
def left():
    player.left(30)

def right():
    player.right(30)

# key input actions
turtle.listen()
turtle.onkeypress(left, "Right")
turtle.onkeypress(right, "Left")

while True:
    player.forward(speed)
I've tried everything and it just doesn't work. I think it does not work because the program is not listening (although I called the listen() function), because I put this in my program:
def print_test():
    print("works")
with this in the key input actions:
turtle.onkeypress(print_test, "Space")
and it gave me no output. I think you might have the same problem. However, I do not know how to fix this because of my lack of expertise.
I hope this helped, although I do not think that you are still working on this problem after 2 years.
Have you considered using Shader Graph?
columns = [col for col in df.columns if col.startswith(('A_', 'B_'))]
df.loc[:, columns]
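A quick self-contained demonstration of the selection above (the column names here are invented for illustration):

```python
import pandas as pd

# Toy frame with a mix of prefixed and unrelated columns (illustrative names)
df = pd.DataFrame({'A_x': [1], 'B_y': [2], 'C_z': [3]})

# str.startswith accepts a tuple of prefixes, so one pass covers both
columns = [col for col in df.columns if col.startswith(('A_', 'B_'))]
subset = df.loc[:, columns]
print(list(subset.columns))  # → ['A_x', 'B_y']
```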
Add your original Apple account in Teams:
Target --> Signing & Capabilities --> Team.
An alternative way (note that eval executes whatever is inside the string, so only use this on trusted input):
def int_sci(value: str) -> int:
    return int(eval(value.replace("e", "*10**")))

>>> int_sci("1e1")
10
>>> int_sci("10e-1")
1
>>> int_sci("1e23")
100000000000000000000000
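If avoiding eval is a concern, the standard-library Decimal type gives the same exact results, since it parses scientific notation exactly (plain int(float("1e23")) would lose precision). This is a sketch, not part of the original answer; int_sci_safe is a made-up name:

```python
from decimal import Decimal

def int_sci_safe(value: str) -> int:
    # Decimal parses scientific notation exactly, with no eval involved
    return int(Decimal(value))

print(int_sci_safe("1e1"))    # → 10
print(int_sci_safe("10e-1"))  # → 1
print(int_sci_safe("1e23"))   # → 100000000000000000000000
```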
Let's break it down step by step.
Let's define some operation xyz that the user can perform on the website.
You want a user to perform the xyz operation via a bot.
You want some sort of acknowledgement that the xyz operation has been completed.
[Help me understand]: How does the "bot" send the notification that xyz succeeded? Shouldn't that rather come from the point where the xyz operation was actually performed?
I will assume that your website [referred to as the server from now on] sends the notification to the channel.
Now, how exactly should we build it?
You do not want your bot to do any heavy lifting. For example, checking whether an error occurred is not the bot's responsibility. Why?
Because the bot is not a "client". It's just something that tells your server "Hey, I received the xyz message".
All parsing should happen within the server.
Another reason is that this makes the bot extensible: you do not need to change the bot's code to support an abc operation.
In my view [design is subjective], the addition of profiles should be async. So, Kafka/RabbitMQ over webhook/websocket. Why?
I am not sure how retryability would work with a webhook/socket, but I am sure it is better with Kafka and RabbitMQ, since they have dead-letter queues. Retryability is important because, in production environments, connections are prone to break and operations are prone to fail. You do not want to lose a message with no logs.
The nature of the business is async. Once your bot communicates "hey, something happened", it has no business knowing what happened with "hey, something".
Any further clarifications are welcome.
If you're using the evolve extension, another approach is to use amend to change the root commit's branch name and then propagate the change with evolve:
hg up 'roots(branch(stiging))' # root commit of branch to be renamed
hg branch -f staging
hg amend
hg evolve
I have successfully managed to run a Raspberry Pi 4 Compute Module (CM) in QEMU. You can check my GitHub guide for instructions:
https://github.com/Apex-Source/Revolution-PI-Qemu/
I use QEMU to emulate my Revolution Pi device for integration testing. The Revolution Pi is based on the Raspberry Pi Compute Module 4 (CM4), so this method might also work for a standard Raspberry Pi.
When you process the WM_NCLBUTTONDOWN, make sure you let DefWindowProc handle it only if it's not a click on one of your custom buttons.
If a prefetch abort or a data abort occurs, the running program cannot continue. On a simple platform you just stop by entering a while(1){} loop. Time to get the debugger.
On a more advanced system you can try to terminate the current program or send it a signal, but continuing is not possible.
Most code highlighters do not treat br tags as line breaks. So, you need to replace all the br tags with \n. Hope it will work; it is working for me.
Simply change "==" to "=" in " user_num == int(input('Enter integer:\n')) " — with ==, the input is only compared rather than assigned to user_num.
Dealing with the same issue, I found a bug report for it on x2go.org. It looks like the way the x2go team digitally signs their executable is not in line with the way Windows expects it to be signed, so Windows views it as coming from an unknown author.
https://bugs.x2go.org/cgi-bin/bugreport.cgi?bug=1444
For Chrome, I was able to just go to my downloads page and select the option to download it anyway.
Underscores are not recommended in header names.
But since you can't modify your client's request, accept access_token as-is in the method request and modify it in the integration request using integration request headers.
This way, you can accept the access_token value sent by the client and map it to access-token before sending it to your service endpoint.
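If it helps, this is roughly what the mapping looks like in the Integration Request (a sketch; the exact field labels depend on the API Gateway console version, and access-token/access_token are the names from above):

```
# Integration Request -> HTTP Headers (REST API)
Name:        access-token
Mapped from: method.request.header.access_token
```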
How about using requirements.txt
?
requirements.txt
djlint==1.36.4
Then run
$ pip install -r requirements.txt
Regular Public CAs like DigiCert, GlobalSign, etc., are not allowed to issue SSL certificates for Internal Names (IP addresses, localhost, Internal URLs, and Server names) due to restrictions imposed by the CA/Browser Forum in 2015.
SecureNT Intranet SSL and GlobalSign issue SSL certificates for intranets using non-public CA certificates. While these SSL certificates are technically the same as those issued by Public CA authorities, browsers do not trust their root certificates. So, the customer is required to install the non-public CA roots once on each of their client PCs (devices). Of course, this can be done using Microsoft Group Policy (GPO).
Some vendors issue free trial certificates for testing purposes.
It's not supported on PHP 7.3 because it was only introduced in PHP 7.4.
How to sum an array in aws step function
NOT WORKING!!!!!!!!
import turtle

def red_star(t, side):
    t.fillcolor('red')
    t.pencolor('black')  # to highlight the lines
    t.pendown()
    t.width(3)
    t.begin_fill()
    for i in range(5):
        t.fd(side)
        t.left(72)
        t.forward(side)
        t.right(144)
    t.end_fill()

t = turtle.Turtle()
s = t.screen
s.delay(0)
t.hideturtle()
t.penup()
red_star(t, 50)
s.exitonclick()
s.mainloop()
There is a catalog website about MCP, https://mcp.ad, which contains many implementations of MCP servers and clients, as well as links to their source code. I hope it can help you.
For now, I have found a workable but far-from-optimal solution in conntrack.
sudo conntrack -E -p $PROTOCOL --dport $PORT
By consuming the lines containing [NEW] and [DESTROY], I can track the number of "active" connections to the server and pause the process when the connection count reaches zero. This option is not great because it is Linux-specific, requires root (or CAP_NET_ADMIN), and requires a separate conntrack process for every port, protocol, and IP version combination. The last point can probably be improved by using the conntrack library instead, but I'd still like to see other answers that don't have these requirements.
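The counting logic itself can be sketched in a few lines (a simplified sketch: it only assumes that each conntrack -E event line contains a [NEW] or [DESTROY] tag; the sample lines are invented):

```python
def track_active(events):
    """Yield the running count of active connections from conntrack -E event lines."""
    active = 0
    for line in events:
        if '[NEW]' in line:
            active += 1
        elif '[DESTROY]' in line:
            active -= 1
        yield active

# Example with canned lines instead of a live `conntrack -E` pipe:
sample = ['[NEW] tcp ...', '[NEW] tcp ...', '[DESTROY] tcp ...']
print(list(track_active(sample)))  # → [1, 2, 1]
```

In practice the lines would come from the stdout of the conntrack subprocess, and "pause when zero" is just a check for the yielded count reaching 0.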
Thank you for this answer, @Bowman Zhu. This was pretty simple but saved me :)
Just use the jsonEncode and toJson functions like you did with jsonDecode and fromJson:
String serialize(List<MyModel> data) {
final list = data.map((map) => map.toJson()).toList();
return jsonEncode(list);
}
Your issue arises because in your transactions table, entity_id is likely stored as a VARCHAR (or TEXT), whereas the id column in the users table is of type BIGINT. PostgreSQL does not automatically cast VARCHAR to BIGINT when comparing them in a WHERE clause.
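A sketch of the explicit cast (table and column names are taken from the description above; which side to cast depends on whether entity_id always holds clean numeric text):

```
-- Cast the text column to BIGINT for the comparison
SELECT t.*
FROM transactions t
JOIN users u ON CAST(t.entity_id AS BIGINT) = u.id;

-- Or cast the BIGINT side to text instead
SELECT t.*
FROM transactions t
JOIN users u ON t.entity_id = u.id::varchar;
```

Note that the first form raises an error if any entity_id row is not numeric, so the second form is safer when the column may contain arbitrary text.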
The answer/solution to this question is to use a WaveFileWriter and save the bytes directly to a MemoryStream rather than a file. The WaveFileWriter wraps the raw byte[] as a formatted WAVE stream. Change the rate and channels as needed.
writerMem = new WaveFileWriter(memStream, new WaveFormat(44100, 16, 1));
Write read bytes to the WaveFileWriter:
writerMem.Write(e.Buffer, 0, e.BytesRecorded);
Call the API passing the Memory Stream only
var recResult = speechToText.Recognize(
audio: memStream,
model: "pt-BR_Multimedia",
contentType: "audio/wav");
This way, the API accepts the MemoryStream and identifies the WAVE stream from within.
The issue here is that there is no sst.py inside /app/app/routes/, but there is an stt.py. It seems like you have a typo in your import. The following import is correct:
from .stt import speech_to_text
OMG, I'm so embarrassed I didn't see the App.css stylesheet that was created. Sorry for wasting everyone's time.
Considering the twisted process of setting up a venv for apache2, it would be preferable to see whether the libraries used in the venv can be copied to the system path so the whole project can run as a native wsgi process. Get the libraries into the venv, then carefully copy them to the system path (/usr/lib/python/dist-packages) without overwriting anything.
add "#Scene0" to the end of the path
#[derive(AssetCollection, Resource)]
pub struct ShipAssets {
#[asset(path = "3d/ships/boat-sail-a.glb#Scene0")]
sail_a: Handle<Scene>,
#[asset(path = "3d/ships/boat-sail-a.glb#Scene0")]
sail_b: Handle<Scene>,
}
Check the answer I provided on this post:
https://stackoverflow.com/a/79512662/5442916
It's a pair of functions that get all drive details, including IDs, which you can use in conjunction with the other functions mentioned here to build a more unique identifier.
There isn’t an out‑of‑the‑box way to do this. Chromium’s build system is designed around producing a full APK (or AAB) for Chrome. Simply switching the target type (from android_apk to android_library) won’t work because many internal GN files (like rules.gni and internal_rules.gni) and other dependencies assume an APK build. In short, converting Chromium to output an AAR would require extensive, non‑trivial modifications across the build configuration and codebase.
As an alternative, consider the officially supported integration methods (such as using WebView or Chrome Custom Tabs) if you need browser functionality in your app.
Resolved by configuration in android/app/build.gradle
buildConfigField "String", "MOENGAGE_APP_ID", "\"YOUR_MOE_APP_ID\"" // Modify the ID here
I am also experiencing a similar thing. All the hs-consumer-api endpoints are returning 403 status. I guess the endpoints now need HMAC authentication.
@Neel, were you able to find a solution for this?
I added the following lines to android\app\src\main\AndroidManifest.xml:
<uses-permission android:name="android.permission.BODY_SENSORS" />
<uses-permission android:name="android.permission.HIGH_SAMPLING_RATE_SENSORS" />
Please try to namespace your controller, as sometimes this can be the issue
(depending on your controller's location).
Add this:
<?php

namespace App\Controller;

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Attribute\Route;

class HomeController{
While Zhang's answer would likely work fine using that library, I'm using AspNetCore3 due to being pointed in that direction via the current tutorial and guides. I did some digging into the AspNetCore3 source code and managed to put together a solution following what it does to manage its AccessTokens.
Here's a link to the source file and the relevant line already marked https://github.com/googleapis/google-api-dotnet-client/blob/main/Src/Support/Google.Apis.Auth.AspNetCore3/GoogleAuthProvider.cs#L62 With that it's fairly straightforward to persist what you need.
Sharing Bitbucket Pipelines configurations between repositories is something that has been desired and requested for a long time.
Up until August 2023, it was straight up impossible (as you can also read in this SO post).
Then, in March 2023, Bitbucket stated in the community forum that they were working on this feature and tracking it in this Jira issue: BCLOUD-14078: Sharing pipeline yml files.
Finally, in August 2023 it's possible, but only for the Premium Plan:
Check async/await.
I had a similar error.
The IDE highlighted the syntax with an "await is not needed" warning, but this is not always correct.
The user impersonation feature is on-boarded to WSO2 Identity Server with version 7.1. Please find the impersonation guide here [1].
[1] https://is.docs.wso2.com/en/next/guides/authorization/user-impersonation/
Add this to the settings at the top of the file:
#+STARTUP: indent
Is there any reason why you can't do this via CSS? That would be the way to do it. If there is a reason you can't use CSS, you can use gtk_text_set_attributes, but each time you change the size you should clear the contents of "attributes" and only then add the new contents. This will probably avoid increasing memory usage.
I am the OP of this question. Thank you all for your attention!
I just figured out the reason behind this unexpected behavior. It seems to be related to the Linux kernel configuration kernel.shm_rmid_forced. When I changed it with the following command:
sudo sysctl -w kernel.shm_rmid_forced=0
everything started working as expected.
I hope this answer makes sense to you!
A comment: I spent a long time on this with a different code; see https://community.intel.com/t5/Intel-MPI-Library/Crash-using-impi/m-p/1457035/highlight/true. I was/am using many more MPI ranks per node, e.g. 64-98. Intel was less than helpful: they denied that it could occur and refused to provide information on the Jenkins code.
My conclusion is that it is (similar to what you indicate) a reproducible intermittent bug in Intel impi. By changing which cluster I used I could sometimes make it work, or by changing the MPI layout; in some cases I had 100% failure on a given cluster. I have not tried the MPI FABRICS approach; interesting.
As per @Eljay's suggestion, I moved the definition of the process() functions after the class declarations and it works fine:
class IdleState : State<StateMachine<TransitionMap, RunState, IdleState>>
{
public:
/* Use parent constructor */
using State<StateMachine<TransitionMap, RunState, IdleState>>::State;
void process() override;
};
class RunState : State<StateMachine<TransitionMap, RunState, IdleState>>
{
public:
/* Use parent constructor */
using State<StateMachine<TransitionMap, RunState, IdleState>>::State;
void process() override;
};
void IdleState::process()
{
std::cout << "IDLE" << std::endl;
state_machine_->emitEvent<IdleState>(StartEvent{});
}
void RunState::process()
{
std::cout << "RUN" << std::endl;
state_machine_->emitEvent<RunState>(StopEvent{});
}
In this article I described in detail how to optimize the loading of a large number of images into Mapbox using a sprite file.
In short, my algorithm of actions was as follows:
- load the sprite and its metadata (a JSON file with information about where each image is located in the file), which we previously generated and stored on a server as static resources
- create an OffscreenCanvas and add the loaded sprite (image) to it
const canvas = new OffscreenCanvas(width, height);
const ctx = canvas.getContext('2d', { willReadFrequently: true });
ctx.drawImage(image, 0, 0);
- for each required image, get it from the canvas as an image and add it to the mapbox
const imageData = ctx.getImageData(x, y, width, height);
map.addImage(imageId, imageData)
import cv2
import numpy as np
# Load images using OpenCV
user_img = cv2.imread(user_image_path)
hamster_img = cv2.imread(hamster_image_path)
# Convert the hamster image to RGBA to handle transparency
hamster_img = cv2.cvtColor(hamster_img, cv2.COLOR_BGR2BGRA)
# Extract the heart and text area from the hamster image
mask = np.all(hamster_img[:, :, :3] > 200, axis=-1) # Assuming the white background is near (255,255,255)
hamster_img[mask] = [0, 0, 0, 0] # Make background transparent
# Resize user image to fit the hamster position
user_resized = cv2.resize(user_img, (hamster_img.shape[1], hamster_img.shape[0]))
# Convert user image to RGBA for transparency handling
user_resized = cv2.cvtColor(user_resized, cv2.COLOR_BGR2BGRA)
# Merge the user image with the hamster image, keeping the heart and text
result = np.where(hamster_img[:, :, 3:] == 0, user_resized, hamster_img)
# Save and display the result
output_path = "/mnt/data/edited_image.png"
cv2.imwrite(output_path, result)
# Print the edited image path
print(output_path)
For me, Google Colab crashed when I was trying to create an ROC graph using matplotlib. Try commenting this out and see if your code runs without crashing colab.
Thank you so much, this saved my lab!!!
document.querySelector('#<scriptTagIdHere>').textContent
Maybe it's just because you are using the latest version; the developer may simply have changed the dimension ordering of the argument. Many AI models trained on older documentation still reference the previous format.
For those who came from Google:
I’m not sure since when, nor could I find any documentation about this or other variables.
A solution to this is to use a type hint for row as follows:
import csv

with open(infile, encoding='utf-8') as csvfile:
    reader = csv.DictReader(csvfile)  # Read CSV rows as dictionaries
    row: dict[str, str]  # type hint: row is a dict with string keys and values
    for row in reader:
        print(row)
        symbol = row['SYMBOL']
        print(symbol)
A reset()
method was added in Enlighten 1.14.0.
I HAAAATE this new way of setting up PHPMailer:
I don't use composer,
the "use" statements fail,
you can't just include it anymore,
I'm on shared hosting, so getting help from them is a pain!
Why oh why did they make it harder!
It was easier before; now it's a drag.
Wasted 4 days, still not working !!!!
from PIL import Image, ImageFilter
# Load image
image_path = "/mnt/data/file-UDdQuDEYdHCb6gmQfKs4mH"
image = Image.open(image_path)
# Apply blur to background while keeping the central subject clear
blurred = image.filter(ImageFilter.GaussianBlur(radius=10))
# Enhance the subject (assuming central focus)
sharp = image.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
# Blend the two images (sharp in center, blurred outside)
enhanced_image = Image.blend(blurred, sharp, alpha=0.7)
# Save the result and print its path
output_path = "/mnt/data/enhanced_image.jpg"
enhanced_image.save(output_path)
print(output_path)
Show/hide password when the eye icon (QPushButton) is clicked
I'm trying to create a function in a register and login form using QLineEdit to show and hide the password when a QPushButton is clicked. I'm a beginner in Python and this is very hard for me... My attempt is not good: if I click the eye button the password is shown, but if I click again to hide it, it does not work.
from PyQt5 import QtCore, QtGui, QtWidgets, uic
from PyQt5.QtWidgets import QPushButton, QLineEdit
import sys
import pymysql
pymysql.install_as_MySQLdb()

class MyWindow(QtWidgets.QMainWindow):
    def __init__(self, maxWidth=None):
        super(MyWindow, self).__init__()
        uic.loadUi('MainWindow.ui', self)
        self.eyepass_show()
        self.eyepass_hide()
        self.btn_show_pwd.clicked.connect(self.eyepass_hide)
        self.btn_show_pwd.clicked.connect(self.eyepass_show)

    def eyepass_show(self):
        self.line_password.setEchoMode(QLineEdit.Normal)
        print('show pass')

    def eyepass_hide(self):
        self.line_password.setEchoMode(QLineEdit.Password)
        print('hide pass')

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    window = MyWindow()
    window.show()
    sys.exit(app.exec_())
Open Firebase Project > Project overview (Project settings) > Service accounts > Manage service account permissions.
Click the terminal icon at the top right and wait until the machine is running.
After that, type this:
echo '[{ "origin": ["*"], "method": ["GET", "HEAD"], "maxAgeSeconds": 3600, "responseHeader": ["Content-Type", "Access-Control-Allow-Origin"] }]' > cors.json
Then run the following, replacing the address (football-abcd) with your Firebase storage bucket:
gsutil cors set cors.json gs://football-abcd.firebasestorage.app
Congratulations, you should see:
Setting CORS on gs://football-abcd.firebasestorage.app/...
It works when i start my server with:
npx ts-node --files src/index.ts
or
npx ts-node -T src/index.ts
Had the same issue. I did a fresh install of anaconda, and suddenly everything worked fine. https://www.anaconda.com
Thanks Cyril Gandon <3 for providing the perfect answer to it.
In WM_NCLBUTTONDOWN (if called before base.WndProc), you can force m.WParam to 1 (HTCLIENT, the client area) in place of 2 (HTCAPTION, the title area). This avoids the capture of the mouse by the desktop manager.
I do the same for WM_NCLBUTTONUP, and use a timer based on GetDoubleClickTime() to distinguish a click from a double click...
I changed @workingdogsupportUkraine's code. The only issue is that it does not show the keyboard just by tapping on the search button; it shows the cancel button after my change.
struct SearchbarView: View {
@Binding var text: String
@State private var showSearch: Bool = false
var onSubmit: () -> Void
var body: some View {
VStack {
HStack {
Spacer()
if showSearch {
SearchBar(text: $text, showSearch: $showSearch, onSubmit: onSubmit)
.frame(width: 350, height: 40)
} else {
Image(systemName: "magnifyingglass")
.onTapGesture {
showSearch = true
}
}
}
}
}
}
struct SearchBar: UIViewRepresentable {
@Binding var text: String
@Binding var showSearch: Bool
var onSubmit: (() -> Void)
func makeUIView(context: Context) -> UISearchBar {
let searchBar = UISearchBar()
searchBar.isEnabled = true
searchBar.searchBarStyle = .minimal
searchBar.autocapitalizationType = .none
searchBar.placeholder = "Search"
searchBar.delegate = context.coordinator
searchBar.setShowsCancelButton(true, animated: true)
return searchBar
}
func updateUIView(_ uiView: UISearchBar, context: Context) {
uiView.text = text
}
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
class Coordinator: NSObject, UISearchBarDelegate {
let parent: SearchBar
init(_ parent: SearchBar) {
self.parent = parent
}
func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
searchBar.showsCancelButton = true
parent.text = searchText
}
func searchBarSearchButtonClicked(_ searchBar: UISearchBar) {
searchBar.resignFirstResponder()
searchBar.showsCancelButton = true
searchBar.endEditing(true)
parent.onSubmit()
}
func searchBarCancelButtonClicked(_ searchBar: UISearchBar) {
parent.text = ""
searchBar.resignFirstResponder()
searchBar.showsCancelButton = false
searchBar.endEditing(true)
parent.showSearch = false
}
func searchBarShouldBeginEditing(_ searchBar: UISearchBar) -> Bool {
searchBar.showsCancelButton = true
return true
}
}
}
Please help me with keyboard focus: the cancel button should be highlighted after I tap the search button. Right now, tapping cancel also dismisses the search.
Here's an approach with a Python script using pandas and json to transform your data frame into the required JSON structure.
import pandas as pd
import json
# Sample DataFrame
df = pd.DataFrame({
'type': ['customer'] * 15,
'customer_id': ['1-0000001'] * 4 + ['1-0000002'] * 6 + ['1-0000003'] * 5,
'email': ['[email protected]'] * 4 + ['[email protected]'] * 6 + ['[email protected]'] * 5,
'# of policies': [4] * 4 + [6] * 6 + [5] * 5,
'POLICY_NO': ['000000001', '000000002', '000000003', '000000004',
'000000005', '000000006', '000000007', '000000008', '000000009', '000000010',
'000000011', '000000012', '000000013', '000000014', '000000015'],
'RECEIPT_NO': [420000001, 420000002, 420000003, 420000004,
420000005, 420000006, 420000007, 420000008, 420000009, 420000010,
420000011, 420000012, 420000013, 420000014, 420000015],
'PAYMENT_CODE': ['RF35000000000000000000001', 'RF35000000000000000000002', 'RF35000000000000000000003', 'RF35000000000000000000004',
'RF35000000000000000000005', 'RF35000000000000000000006', 'null', 'RF35000000000000000000008', 'RF35000000000000000000009', 'null',
'RF35000000000000000000011', 'RF35000000000000000000012', 'null', 'RF35000000000000000000014', 'RF35000000000000000000015'],
'KLADOS': ['Αυτοκινήτου'] * 15
})
# Group by 'type' and 'customer_id'
grouped_data = []
for (cust_type, cust_id), group in df.groupby(['type', 'customer_id']):
attributes = {
"email": group['email'].iloc[0],
"# of policies": int(group['# of policies'].iloc[0]), # Convert to int
"policies details": group[['POLICY_NO', 'RECEIPT_NO', 'PAYMENT_CODE', 'KLADOS']].to_dict(orient='records')
}
grouped_data.append({
"type": cust_type,
"customer_id": cust_id,
"attributes": attributes
})
# Convert to JSON and save to file
json_output = json.dumps(grouped_data, indent=4, ensure_ascii=False)
# Print the output
print(json_output)
- Group by type and customer_id → ensures customers are uniquely identified.
- Extract email and # of policies from the first row, since these values are consistent within each group.
- Convert the policy details to a list of dictionaries using .to_dict(orient='records').
- Store the structured data in a list.
- Dump the JSON with indent=4 for readability and ensure_ascii=False to retain Greek characters.
[
{
"type": "customer",
"customer_id": "1-0000001",
"attributes": {
"email": "[email protected]",
"# of policies": 4,
"policies details": [
{
"POLICY_NO": "000000001",
"RECEIPT_NO": 420000001,
"PAYMENT_CODE": "RF35000000000000000000001",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000002",
"RECEIPT_NO": 420000002,
"PAYMENT_CODE": "RF35000000000000000000002",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000003",
"RECEIPT_NO": 420000003,
"PAYMENT_CODE": "RF35000000000000000000003",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000004",
"RECEIPT_NO": 420000004,
"PAYMENT_CODE": "RF35000000000000000000004",
"KLADOS": "Αυτοκινήτου"
}
]
}
},
{
"type": "customer",
"customer_id": "1-0000002",
"attributes": {
"email": "[email protected]",
"# of policies": 6,
"policies details": [
{
"POLICY_NO": "000000005",
"RECEIPT_NO": 420000005,
"PAYMENT_CODE": "RF35000000000000000000005",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000006",
"RECEIPT_NO": 420000006,
"PAYMENT_CODE": "RF35000000000000000000006",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000007",
"RECEIPT_NO": 420000007,
"PAYMENT_CODE": "null",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000008",
"RECEIPT_NO": 420000008,
"PAYMENT_CODE": "RF35000000000000000000008",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000009",
"RECEIPT_NO": 420000009,
"PAYMENT_CODE": "RF35000000000000000000009",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000010",
"RECEIPT_NO": 420000010,
"PAYMENT_CODE": "null",
"KLADOS": "Αυτοκινήτου"
}
]
}
},
{
"type": "customer",
"customer_id": "1-0000003",
"attributes": {
"email": "[email protected]",
"# of policies": 5,
"policies details": [
{
"POLICY_NO": "000000011",
"RECEIPT_NO": 420000011,
"PAYMENT_CODE": "RF35000000000000000000011",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000012",
"RECEIPT_NO": 420000012,
"PAYMENT_CODE": "RF35000000000000000000012",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000013",
"RECEIPT_NO": 420000013,
"PAYMENT_CODE": "null",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000014",
"RECEIPT_NO": 420000014,
"PAYMENT_CODE": "RF35000000000000000000014",
"KLADOS": "Αυτοκινήτου"
},
{
"POLICY_NO": "000000015",
"RECEIPT_NO": 420000015,
"PAYMENT_CODE": "RF35000000000000000000015",
"KLADOS": "Αυτοκινήτου"
}
]
}
}
]
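If you also want to persist the JSON string produced above to disk, a small sketch (the filename is an assumption) would be:

```python
import json

# Stand-in for the json_output string built above
json_output = json.dumps([{"type": "customer"}], indent=4, ensure_ascii=False)

# Write with UTF-8 encoding so Greek characters survive on disk
with open("customers.json", "w", encoding="utf-8") as f:
    f.write(json_output)
```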
I hope this information is helpful. Please let me know if it works for you or if you need any further clarification.
The current version of sqlcmd (https://github.com/microsoft/go-sqlcmd) no longer has a file size limitation.
Jwts.parserBuilder()                 // create a parser builder
    .setSigningKey(yoursigningkey)   // key used to verify the JWS signature
    .build()                         // build an immutable, thread-safe parser
    .parseClaimsJws(token);          // verify the signature and parse the claims
Can someone explain this process step by step — what each step means and what it is used for?
Why do you think it should?
The documentation says that SemaphoreSlim.Wait blocks the current thread until it can enter the SemaphoreSlim.
The Release method docs are not very clear, but I would expect that releasing with a large count simply releases the current thread and the other waiting threads. See the remarks section.
That sounds like an interesting optimization! Have you looked into using ss or lsof to monitor active connections to your process? You could periodically check for connections and trigger SIGSTOP/SIGCONT accordingly. Would a combination of netstat (or ss) with a simple script work for your use case, or are you looking for a more event-driven solution like epoll or inotify on /proc/net/tcp?
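As a rough illustration of the polling idea, here is a minimal sketch (the PID, port, and one-second interval are hypothetical; it parses ss output, which assumes Linux):

```python
import os
import signal
import subprocess
import time

def next_action(has_conns: bool, paused: bool):
    """Decide whether to resume, pause, or do nothing."""
    if has_conns and paused:
        return "SIGCONT"   # clients arrived: wake the process up
    if not has_conns and not paused:
        return "SIGSTOP"   # idle: freeze the process
    return None

def has_connections(port: int) -> bool:
    # ss prints a header line; any extra lines mean established connections
    out = subprocess.run(
        ["ss", "-tn", "state", "established", f"sport = :{port}"],
        capture_output=True, text=True,
    ).stdout
    return len(out.strip().splitlines()) > 1

if __name__ == "__main__":
    PID, PORT = 12345, 8080   # hypothetical target process and port
    paused = False
    while True:
        action = next_action(has_connections(PORT), paused)
        if action:
            os.kill(PID, getattr(signal, action))
            paused = action == "SIGSTOP"
        time.sleep(1)
```

An event-driven variant would replace the polling loop, but the pause/resume decision logic stays the same.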
Add these in proguard rules
-dontwarn com.facebook.infer.annotation.Nullsafe$Mode
-dontwarn com.facebook.infer.annotation.Nullsafe
Go to: Right-click on your project -> Properties -> OMNeT++ -> Makemake -> select src -> Options -> Compile tab -> enable "Add include path exported from referenced projects".
I hope it is useful.
Here is a comparison video of string concatenation vs. string interpolation:
https://www.youtube.com/watch?v=ykgw1xvIYuE
Hope this helps.
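For a quick textual illustration of the difference (shown here in Python; the video may use another language):

```python
name = "world"

# String concatenation: pieces joined with +
concatenated = "Hello, " + name + "!"

# String interpolation: an f-string evaluates {name} in place
interpolated = f"Hello, {name}!"

assert concatenated == interpolated == "Hello, world!"
```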
You can refer to the following URL:
https://github.com/tensorflow/tensorflow/issues/86953#event-16275455512
This seems to be a problem with Keras. I used this method to solve it before, so you can try it. However, in recent days Colab TPU seems to have problems and I can't connect to a TPU.
Using an OBB (Opaque Binary Blob) file is mainly beneficial for apps like HiTV that provide extensive media content, such as movies, dramas, and live streaming. Since Google Play limits APK sizes to 100MB (previously 50MB), OBB allows storing additional assets like high-quality video previews, UI graphics, and other large data files, enabling a smoother user experience.
However, OBB files require additional handling, such as using the Google Play Expansion File API or a custom downloader. If your app targets devices below Android 2.3, compatibility issues may arise, and attempting to load an OBB file on such devices could lead to exceptions. To ensure a seamless experience for HiTV users, consider fallback mechanisms like streaming assets dynamically instead of relying solely on OBB storage.
I have encountered similar situations and tried to explain the solution here.
https://mcuslu.medium.com/aws-dynamodb-json-data-import-some-tips-and-tricks-fb00d9f5b735
Same here, accessing ESIOS: curl works, but for some reason requests 2.32.3 fails with a 403 code.
Any workaround?
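One common cause is the server rejecting the default python-requests User-Agent. A hedged sketch of a workaround (whether ESIOS checks this specific header is an assumption) is to send a browser-like agent string:

```python
import requests

def make_session() -> requests.Session:
    """Session with a browser-like User-Agent.

    Many servers return 403 for the default "python-requests/x.y.z" agent.
    """
    s = requests.Session()
    s.headers.update({
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept": "application/json",
    })
    return s

# Usage (the URL is a placeholder, not the real ESIOS endpoint):
# resp = make_session().get("https://example.com/api", timeout=10)
```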
From reading https://github.com/microsoft/vscode/blob/116b986f778e4473bcd658e5fbb8d6c7d71c1be4/src/vs/workbench/contrib/chat/browser/media/chatStatus.css#L54, it's part of a (LLM) chat quota indicator.
The sound you're hearing is the terminal bell in VSCode. You can disable it by modifying your VSCode settings. Here’s how:
1. Open VS Code and go to Preferences → Settings (or press ⌘+,)
2. In the search bar, type “terminal bell” or “enableBell.”
3. Find the setting Terminal › Integrated: Enable Bell and uncheck it.
4. Alternatively, you can open your `settings.json` file and add the following line: "terminal.integrated.enableBell": false
This will disable the beep sound in the integrated terminal. If you still experience any sounds, it might be coming from your shell configuration, so check your shell settings as well.
In my case, I had to include a custom user-agent header along with either acceptJson() or accept('application/json').
\Illuminate\Support\Facades\Http::acceptJson()->withHeaders([
'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36'
])->get($url);
I would prefer Python to delete snapshots in bulk across all AWS accounts under an AWS Organization.
Below is the blog.
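As a rough sketch of the age-filter helper such a script needs (the data shape mirrors boto3's describe_snapshots output; the 30-day cutoff is an assumption), separated from the AWS calls so it is easy to test:

```python
import datetime

def snapshots_to_delete(snapshots, older_than_days=30, now=None):
    """Return IDs of snapshots older than the cutoff.

    `snapshots` mirrors the shape of boto3's describe_snapshots output:
    {"SnapshotId": str, "StartTime": timezone-aware datetime}.
    """
    now = now or datetime.datetime.now(datetime.timezone.utc)
    cutoff = now - datetime.timedelta(days=older_than_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# In a real script, the IDs returned here would be fed to
# ec2.delete_snapshot(SnapshotId=...), after assuming a role in each
# member account via STS to cover the whole Organization.
```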
Following @nakahide3200's advice, I found it was the allanoricil.nuxt-vscode-extension-0.0.21 extension that caused this problem.
Here is how I found it:
$ cd ~/.vscode/extensions
$ ls | grep nuxt
allanoricil.nuxt-vscode-extension-0.0.21
$ rm -rf allanoricil.nuxt-vscode-extension-0.0.21
Thank you @nakahide3200 very much!
Here is the script:
import pandas as pd
import networkx as nx

data = {
    "Product": ["A", "B", "C", "D", "E"],
    "Selling_Locations": [[1, 2, 3], [2, 5], [7, 8, 9, 10], [5, 4], [10, 11]]
}
df = pd.DataFrame(data)

# Build a graph with one node per product
G = nx.Graph()
for product in df["Product"]:
    G.add_node(product)

# Connect products that share at least one selling location
for i in range(len(df)):
    for j in range(i + 1, len(df)):
        if set(df["Selling_Locations"][i]) & set(df["Selling_Locations"][j]):
            G.add_edge(df["Product"][i], df["Product"][j])

# Each connected component is one group of overlapping products
groups = list(nx.connected_components(G))
for i, group in enumerate(groups, 1):
    print(f"Group {i}: {sorted(group)}")
Output:
Group 1: ['A', 'B', 'D']
Group 2: ['C', 'E']
This was solved by Spark support: the issue was that in the Paddle product catalog, you should not specify a number for trial days.
It's great to see your structured approach to organizing an Android project! Your thoughtful exploration of MVC in Android shows a strong commitment to clean architecture, which is essential for maintainable and scalable apps.
If the bin is private, you might also need an access key, which you can add to your headers. Below is straight from the manual (https://jsonbin.io/api-reference/bins/read):
"X-Access-Key (Required): You can now access your private records with the X-Access-Key header too. Refer to the FAQs for more information on X-Access-Key. Make sure you've granted Bins Read Access Permission to the Access Key you'll be using. You can create Access Keys on API Keys page."
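A minimal sketch of building such a request (the v3 endpoint shape is taken from jsonbin.io's API reference; bin ID and key are placeholders):

```python
def jsonbin_read_request(bin_id: str, access_key: str):
    """Return (url, headers) for reading a private jsonbin.io bin."""
    url = f"https://api.jsonbin.io/v3/b/{bin_id}"
    headers = {"X-Access-Key": access_key}
    return url, headers

# Usage with requests (placeholders, not real credentials):
# import requests
# url, headers = jsonbin_read_request("<bin-id>", "<access-key>")
# resp = requests.get(url, headers=headers)
```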
If anyone still has this problem, the trick is to set pages first, posts second, then it works.
If anyone runs into the same problem:
The only solution I found was by switching to Azure Flexible Consumption plan to allow for vnet integration and then using a vnet / service endpoint to let the Azure Function access KeyVault secrets.
Thanks a lot. Yes, using the "." solved the problem. Many thanks again for taking the time.
In my case, uninstalling my Homebrew version with brew uninstall shopify-cli worked for me.
As of 2025, you can just pip install triton-windows
More information on installing and troubleshooting is at https://github.com/woct0rdho/triton-windows
Go to https://github.com/Purfview/whisper-standalone-win/releases/tag/libs, download cuBLAS.and.cuDNN_CUDA12_win_v2.7z, and add it to your CUDA bin directory.
I managed to fix this problem. The new problem I have is that this extension cannot update the quantity from stock. How can I fix this?
Since the 'id' column exists in both tables, you have to qualify it with the table name in the WHERE clause (e.g. orders.id instead of id).
Node.js as of v22 supports running .ts files natively with the --experimental-strip-types flag.
I've got it working locally; it was pretty straightforward to adjust my code, I just needed to follow a few rules here.
In the AWS Lambda config I have added an environment variable NODE_OPTIONS with --experimental-strip-types, and I've changed the runtime settings to handler.ts.handler, but I get the same error as above.
I feel like it should work but just missing some link.
import 'package:retrofit/retrofit.dart';