You just need this:
interface Foo extends IController {}
Removing <p class=MsoNormal><o:p> </o:p></p> worked.
There is no alternative API like Spotify's, but you can analyze audio yourself using Essentia models or this one: https://reccobeats.com/docs/apis/extract-audio-features
The background of a stack screen is not transparent; it has a default white background. One thing you might have missed: when you navigate to a new screen, the navigator plays a default animation that stacks the new screen above the previous one like stacking cards, so screens need to be non-transparent to cover the previous ones.
If you want to apply the same background to all screens, you can create a screen layout and use it within each of your screen components.
Try replacing !!showWarning &&
with !!showWarning ?
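The reason the swap helps comes down to how JavaScript evaluates the two operators; here is a minimal plain-JS sketch (the string is an illustrative stand-in for a JSX element):

```javascript
// `a && b` returns `a` itself when `a` is falsy, so a falsy left-hand
// side (like 0 or an empty string) leaks through and React tries to
// render it. A ternary lets you pick an explicit fallback instead.
const element = "<Warning />"; // stand-in for a JSX element

const withAnd = 0 && element;           // evaluates to 0
const withTernary = 0 ? element : null; // evaluates to null, which React skips

console.log(withAnd);     // 0
console.log(withTernary); // null
```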
It appears at first that the 9 cycles taken by the register value data passing through the load unit, mul unit, and add unit constitute the actual CPE (cycles per element) critical path, rather than the xpwr path on the left.
However, these 9 cycles are only incurred during the first iteration of the loop. Each subsequent iteration requires just 5 cycles, as shown in the diagram:
Paths marked with the same color in the diagram execute in parallel. Since the mul operation takes 5 cycles, data's add and load operations and res's add operation can complete within that mul's window. Specifically:
Thus, the slowest operation (and therefore the critical path) in each iteration remains the 5-cycle mul operation for xpwr.
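The arithmetic above can be sketched as a tiny latency model; the 9-cycle first iteration and 5-cycle mul latency are the figures from the analysis, while the function name is mine:

```python
# A small model of the loop's cycle count, assuming (as in the analysis)
# a 9-cycle first iteration (load -> mul -> add chain) and the 5-cycle
# mul as the per-iteration critical path thereafter.
def total_cycles(n, first_iter=9, mul_latency=5):
    """Cycles to process n elements under the latency model above."""
    if n == 0:
        return 0
    return first_iter + (n - 1) * mul_latency

# CPE is the slope of cycles vs. elements, so the one-off 9-cycle start
# washes out and CPE converges to the 5-cycle mul latency:
cpe = (total_cycles(1001) - total_cycles(1)) / 1000
print(cpe)  # 5.0
```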
I also want to convert an RVT file into IFC. I have uploaded the RVT file to Autodesk but am not able to convert it into IFC.
Can you suggest how we can convert it, i.e. which Autodesk Forge API should be used?
I know it's super late, but I had this same issue and I managed to copy those .graphql files with these scripts:
"build": "rm -rf dist && tsc && copyfiles -u 1 'src/**/*.graphql' dist/",
"start": "npm run build && node ./dist/index.js"
Query by GQL:
SELECT Count(*) FROM `kind`
Have you found the answer? I have the same problem.
Cloud computing refers to the delivery of computing services over the internet, including storage, servers, databases, networking, software, and analytics. It allows organizations to access and manage data and applications without the need for physical infrastructure, leading to cost savings and operational efficiency.
When it comes to comparing different types of clouds, there are three main models:
Public Cloud: Hosted by third-party providers, public clouds like AWS, Azure, and Google Cloud offer shared infrastructure accessible over the internet. They are cost-effective and scalable, making them suitable for businesses looking to minimize infrastructure costs.
Private Cloud: A private cloud is dedicated to a single organization, providing enhanced control, security, and customization. It is ideal for companies with stringent data security and regulatory requirements.
Hybrid Cloud: Hybrid clouds combine public and private clouds, allowing data and applications to move seamlessly between the two. This model offers flexibility, enabling businesses to optimize costs while maintaining data security.
For businesses seeking robust and reliable cloud solutions, Sify Technologies' cloud services provide comprehensive offerings across public, private, and hybrid clouds. With a focus on security, scalability, and seamless integration, Sify ensures that businesses can harness the power of cloud computing effectively.
To modify the script for a 128-bit nonce and 136-bit private key:
Set ell = 128 to reflect the nonce size.
Heuristically set b1 = 2^72 and c1 = 2^64 (refine based on the exact leakage model).
Increase the matrix size to 35x35 and adjust loop ranges (17 bytes for d, 16 bytes for k).
Update matrix indices to match the new size.
The computation of b1 and c1 depends on the leakage model, not the curve, but the curve order n must be considered in the lattice.
The exact values of b1 and c1 require the specific challenge context (e.g., AMOSSYS CTF). Without it, the heuristic values may need testing. The attack is curve-agnostic as long as n > 2^136. For ECDSA, or for nonce bits set to 1, additional modifications are needed, but the latter may invalidate the signature.
Alternative to the Spotify API, no authentication required.
I found how to resolve the problem, which is how to add a DSN from the database to the transport with Symfony Mailer.
I created a service like this:
namespace App\Services;

use App\Entity\IdentifiantMailing;
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\Mailer\Mailer;
use Symfony\Component\Mailer\Transport;
use Symfony\Component\Mime\Email;
use Twig\Environment;

class EmailSender
{
    private $em;
    private $adresse;
    private $mdp;
    private $serveur;
    private $port;
    private $from;
    private $twig;

    public function __construct(EntityManagerInterface $em, Environment $twig)
    {
        $this->em = $em;
        $identifiants = $this->em->getRepository(IdentifiantMailing::class)->findAll();
        $this->adresse = $identifiants[0]->getAdresse();
        //$this->adresse = rawurlencode($this->adresse);
        $this->mdp = $identifiants[0]->getMotDePasse();
        //$this->mdp = rawurlencode($this->mdp);
        $this->serveur = $identifiants[0]->getServeur();
        $this->port = $identifiants[0]->getPort();
        $this->from = $identifiants[0]->getDepuis();
        $this->twig = $twig;
    }

    public function sendEmail($sender, $destinataire, $subject, $pathTwigTemplate, $PJ, $context)
    {
        $dsn = sprintf('smtp://%s:%s@%s:%d', $this->adresse, $this->mdp, $this->serveur, $this->port);
        $transport = Transport::fromDsn($dsn);
        $mailer = new Mailer($transport);

        // $context may be null (no template variables) and $PJ may be null (no attachment)
        $email = (new Email())
            ->from($sender)
            ->to($destinataire)
            ->subject($subject)
            ->html($this->twig->render($pathTwigTemplate, $context ?? []));
        if (null !== $PJ) {
            $email->attachFromPath($PJ);
        }
        // addTextHeader() takes a single value, so the suppress flags go in one string
        $email->getHeaders()->addTextHeader('X-Auto-Response-Suppress', 'OOF, DR, RN, NRN, AutoReply');

        $mailer->send($email);
    }
}
And I use it like this:
namespace App\Controller;

use App\Services\EmailSender;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Attribute\Route;

final class TestSymf7Controller extends AbstractController
{
    #[Route('/test', name: 'test')]
    public function index(): Response
    {
        return $this->render('test/index.html.twig', [
            'controller_name' => 'TestSymf7Controller',
        ]);
    }

    #[Route('/testmessage', name: 'testmessage')]
    public function sendEmail(EmailSender $emailSender): Response
    {
        // Preparing the email.
        // To consume Messenger in dev, run:
        //   php bin/console messenger:consume async -vv

        // The sender may be a non-existent address to discourage replies,
        // or a real one so that clients don't route the mail to spam.
        $sender = '[email protected]';
        $destinataire = '[email protected]';
        $subject = 'Exemple d\'e-mailtest1';
        // Must be in /templates/emails/..
        $pathTwigTemplate = '/emails/inscriptionok.html.twig';
        // The attachment must be in /public/piecesemails/..
        //$PJ = '/public/piecesemails/pj.pdf'; $PJ = $this->getParameter('kernel.project_dir') . $PJ;
        // or $PJ = null
        $PJ = null;
        // $context is an array or null
        $context = ['email' => '[email protected]'];
        //$context = null;
        $emailSender->sendEmail($sender, $destinataire, $subject, $pathTwigTemplate, $PJ, $context);
        // Email sent.
        return new Response(
            '<html><body>OK Done !</body></html>'
        );
    }
}
It runs.
For reference, the entity looks like this:
namespace App\Entity;

use App\Repository\IdentifiantMailingRepository;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: IdentifiantMailingRepository::class)]
class IdentifiantMailing
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column(type: "integer")]
    private $id;

    #[ORM\Column(type: "string", length: 255)]
    private $adresse;

    #[ORM\Column(type: "string", length: 255)]
    private $motDePasse;

    #[ORM\Column(type: "string", length: 255)]
    private $serveur;

    #[ORM\Column(type: "string", length: 6)]
    private $port;

    #[ORM\Column(type: "string", length: 255)]
    private $depuis;

    #[ORM\Column(type: "text")]
    private $message;

    public function getId(): ?int
    {
        return $this->id;
    }

    public function getAdresse(): ?string
    {
        return $this->adresse;
    }

    public function setAdresse(string $adresse): self
    {
        $this->adresse = $adresse;
        return $this;
    }

    public function getMotDePasse(): ?string
    {
        return $this->motDePasse;
    }

    public function setMotDePasse(string $motDePasse): self
    {
        $this->motDePasse = $motDePasse;
        return $this;
    }

    public function getServeur(): ?string
    {
        return $this->serveur;
    }

    public function setServeur(string $serveur): self
    {
        $this->serveur = $serveur;
        return $this;
    }

    public function getPort(): ?string
    {
        return $this->port;
    }

    public function setPort(string $port): self
    {
        $this->port = $port;
        return $this;
    }

    public function getDepuis(): ?string
    {
        return $this->depuis;
    }

    public function setDepuis(string $depuis): self
    {
        $this->depuis = $depuis;
        return $this;
    }

    public function getMessage(): ?string
    {
        return $this->message;
    }

    public function setMessage(string $message): self
    {
        $this->message = $message;
        return $this;
    }
}
Problem solved. Answer: deactivate ZHA
This didn't work for me; I'm still searching for an answer to this.
Run sudo npm install -g nodemon, then check it's installed using nodemon -v.
Are we posting AI answers now? sumon mia
Is @angular/flex-layout deprecated or no longer compatible with Angular 18?
Ans: Yes, you can check the deprecation notice on the @angular/flex-layout npm page. The last version published is 15.0.0-beta.42,
so it will work only up to Angular 15.
What are the recommended alternatives for responsive layout in Angular 18? Is there any migration guide from fxLayout to CSS/Tailwind/CDK?
These are the details provided in their GitHub repo:
NOTE: The Angular team no longer publishes new releases of this project. Please review this blog post for alternatives, and this article for the explanation and next steps.
Here's an idea. Not sure if it is optimal, but it should be correct.
Of course, you'd insert tuples of (value, parent_set) into the heap so you know which value came from which set once it's popped from the heap and needs to be replaced.
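A sketch of that idea using Python's heapq, with (value, source_index) tuples so a popped value can be replaced from the set it came from (the function and variable names are mine):

```python
import heapq

def merge_smallest(sets):
    """Yield all values across the sorted input lists in ascending order.

    The heap holds (value, source_index) tuples, so when the minimum is
    popped we know which set to pull the replacement value from.
    """
    iters = [iter(s) for s in sets]
    heap = []
    for i, it in enumerate(iters):
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first, i))
    out = []
    while heap:
        value, i = heapq.heappop(heap)
        out.append(value)
        nxt = next(iters[i], None)  # replace the popped value from the same set
        if nxt is not None:
            heapq.heappush(heap, (nxt, i))
    return out

print(merge_smallest([[1, 4, 9], [2, 3], [5]]))  # [1, 2, 3, 4, 5, 9]
```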
So apparently, I had to spm_encode my input text first, then run the above commands.
I am getting the error "No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.)." when I use a blob trigger. The code runs fine locally, but after Azure deployment it throws this error.
Be aware that # is not a special character; you do not need to escape it with %.
The correct answer to your first question is lehtmi's: using a frontier pattern is the way to go, and yes, it does work on the latest 5.1 and LuaJIT to make a match using string.match:
string.match(str, "%f[^\n\r\0]###?[^#\r\n\0]+")
This should yield ##First line.
How could I go about matching a pattern that starts with "line start" in Lua?
The simplest approach is to split the string into lines:
for line in str:gmatch"[^\r\n]*[^\n]?" do
if line:find"^###?.+" then -- stuff here
end
end
That should be enough.
When you run the query below, you are asking MySQL to display the result of the database() function (reference):
mysql> select database();
But in the case below, MySQL throws an error because no database has been selected at this point:
mysql> select databese();
ERROR 1046 (3D000): No database selected
Once you have selected a database, MySQL looks for a function named databese() within your DB, but since it is not defined, it throws this error instead:
mysql> select databese();
ERROR 1305 (42000): FUNCTION hoge.databese does not exist
Do this to sort your selection in the Visual Studio editor:
Tools → Options → Environment → Keyboard
I reassigned the shortcut for Edit.SortLines from Shift-Alt-S to Ctrl-S, Ctrl-S.
Shift-Alt-S activates the File menu, which can be frustrating and, in my case, was the reason for the reassignment.
Anything in the selection will be sorted. If you keep invoking the shortcut on the same selection, it will sort backwards and forwards to your heart's content.
That's any selection, not just enumerations.
My code is too fragmented to be posted as-is for a solution here, but what I essentially do is recursively go into each $result, see where the 'english' key appears, and handle it accordingly, all dynamically. The code is a work in progress and we are testing it now, but it works!
foreach ( recursiveFind($tempF, 'english') as $result) {
For the long term, we shall refactor and find a better, more scalable solution. Thank you all once again!
Can you mock the other classes instead of the controller? Unit testing on a real DB isn't a good idea. Just use the controller's public methods as if you were using the API. I had a unit test like this:
@ExtendWith(MockitoExtension.class)
class TestLaunch {

    @Mock
    private ReposMineserver mineservers;
    @Mock
    private ReposTariff tariffs;
    @Mock
    private ServiceMinecraftServerObserver observers;
    @Mock
    private ServiceHandlers handlers;
    @InjectMocks
    private RootController controller;

    private final ByteArrayOutputStream errContent = new ByteArrayOutputStream();
    private final PrintStream originalErr = System.err;

    @BeforeEach
    public void setUp() {
        System.setErr(new PrintStream(errContent));
    }

    @AfterEach
    public void restoreStreams() {
        System.setErr(originalErr);
    }

    @Test
    void launch_serverExists_shouldReturnOk() {
        Config.PATH_TO_SERVERS = "test/";
        Integer id = 2;
        Mineserver mineserver = new Mineserver();
        mineserver.setId(id);
        mineserver.setIdTariff(id);
        Tariff tariff = new Tariff();
        tariff.setCpuThreads((short) 2);
        tariff.setHoursWorkMax(9999);
        tariff.setMemoryLimit((long) 9999);
        tariff.setRam((short) 4);
        lenient().when(handlers.get(2)).thenReturn(new MinecraftHandler(mineserver, tariff));
        lenient().when(mineservers.findById(id)).thenReturn(Optional.of(mineserver));
        lenient().when(tariffs.findById(id)).thenReturn(Optional.of(tariff));

        // Test data files (emulate minecraft)
        File file = new File("test/server_2");
        file.mkdirs();
        file = new File("test/server_2/run.sh");
        System.out.println(file.getAbsolutePath());
        if (!file.exists()) {
            try {
                file.createNewFile();
                FileWriter f = new FileWriter(file);
                f.append("while true; do echo 123; done");
                f.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        file = new File("test/server_2/server.properties");
        System.out.println(file.getAbsolutePath());
        if (!file.exists()) {
            try {
                file.createNewFile();
                FileWriter f = new FileWriter(file);
                f.append("max-players=2\nserver-port=25565\nquery.port=25565");
                f.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        ResponseEntity<Void> response = controller.launch(id);
        assertEquals(HttpStatus.OK, response.getStatusCode());
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        String errOutput = errContent.toString();
        System.out.println("Captured error output: " + errOutput);
        assertEquals(true, controller.is_alive(id).getBody());
    }
}
Integration testing may include HTTP requests to a real server with a database, but unit testing with a real DB causes errors during "mvn clean install". All your test data must be independent of the environment.
The following reference link will certainly assist in implementing the Swagger Aggregator using Spring Cloud Gateway
https://www.baeldung.com/spring-cloud-gateway-integrate-openapi
Hope that helps. I am happy to assist you further.
Thanks,
Mahendra
I figured out what was going on. It turns out that MySQL tries to look for a user-defined function when no built-in function matches, and if no database is selected, it throws the "No database selected" error.
On the application overview page of your app in Partner Center, you'll find a section called "Manage package flights". Upload your package there and assign a group to this flight. If you don't have groups so far, you'll find a button to create one.
When certification is done and the app is published, it will only be delivered to the users in that group. They need to be signed in to the Microsoft Store, of course.
With the Serenity framework, add this config in serenity.conf:
chrome.preferences {
credentials_enable_service = false
profile.password_manager_enabled = false
profile.password_manager_leak_detection = false
}
You need to go into the project folder directly, not through Xcode, then click on Info.plist. This will make the file visible in Xcode, and you can then right-click it and select "edit as source code".
The build file location reference is held in memory; open Task Manager and end all node.exe processes.
It is recommended to use SmartCodable; with it, none of this is a problem. It supports type conversion and fault-tolerant processing, and even supports inheritance.
Change the android:name="com.google.android.gms.wearable.MESSAGE_RECEIVED" action in your manifest to android:name="com.google.android.gms.wearable.DATA_CHANGED".
I don't know whether it is the same in Flutter, but usually you have to set a data tag on the intent, e.g.:
<data android:scheme="wear" android:host="*"
    android:path="/start-activity" />
You can force the Firestore client to use the REST transport instead of gRPC. Follow these steps:
Install the required packages.
Set the transport to REST when creating the Firestore client.
Authenticate as usual (using service account credentials).
In your HTML, inside your <head> (after <title></title>) you will need to use:
<link rel="icon" href="./Images/image name.png" type="image/png" />
Replace "image name" with your actual image name.
Here is the code I tested it with:
<title>Dark Mode Toggle</title>
<link rel="icon" href="./Images/yorha crewpic.png" type="image/png" />
This works fine for me.
Well, it seems that you have to either pass a second type parameter or pass no type parameters at all:
searchBar = viewChild<SearchBarComponent, ElementRef>('searchBar', { read: ElementRef });
Or
searchBar = viewChild('searchBar', { read: ElementRef });
This is TypeScript behavior. Since the read option relies on the second type parameter, it either has to be specified or no type parameters should be specified, in which case TypeScript will infer the type from whatever you pass in.
I'm not very familiar with how Discord bots work, but there seems to be plenty of good documentation on their API:
https://discord.com/developers/docs/intro
If you really want to stick with Python, it is worth checking out webhooks for Discord. Python has a library for that as well:
https://pypi.org/project/discord-webhook/
Python is an interpreted language, often run inside a virtual environment; pip is an excellent tool for handling all the packages needed to run a Python script.
More information on how to set up a virtual environment:
https://docs.python.org/3/library/venv.html
It is best to figure out what you want your bot to do first, as there are probably lots of different ways to accomplish your goal. In the future it helps to ask more specific questions: what you want your bot to do, which language/libraries you want to use, and exactly what problem you are encountering, as well as providing code that you've written. Best of luck!
I ran into this problem too. Check your FastAPI version, then find the Starlette version it depends on; you can find the required python-multipart package version in Starlette's requirements. Install the correct version of python-multipart and the problem is solved.
I had some trouble understanding the "why?" of these answers and their differences, and wanted to give a re-summation based on my additional one-hour rabbit hole, for anyone else wondering when each is needed. First:
maven-jar-plugin - creates a jar of ONLY your code.
maven-assembly-plugin - creates a jar of your code AND direct copies of all the code you used (it can also handle additional packaging requirements afterwards).
maven-shade-plugin - creates a jar of your code AND direct copies of all the code you used AND renames/refactors all of that other code to prevent potential naming conflicts with other uber-jars being used concurrently.
My main question was why all three are still actively maintained; if the shade plugin is just better, why bother with the others? The answer seems to lie in the three different audiences these plugins serve.
Any traditional library uploaded to a Maven repo is going to aim for the slimmest possible jar containing ONLY its own code, leaning instead on the pom.xml to define what people should download from elsewhere (avoiding the nightmare of shipping every package).
When I personally wanted to ship a quick executable jar for a script I was using, along with a runtime, I leaned on the assembly plugin. It can package a fat jar with all the dependent classes from other jars copied into your main jar, but it can also do additional steps afterwards, like taking the jar, a JRE, and some help docs and putting them into a zip file; that's the assembly plugin's strength.
The assembly plugin is also much more straightforward: it does basic dependency management and packaging, and that's it. It was published earlier (version 2.0 was published to mvnrepository in 2006, two years before shade v1.0), and shade came later with a much more complicated problem to solve.
Now, the final use case: the so-called uber or shaded jar (I have seen it referenced as both). This is for situations where you REALLY need an EXACT version of a library packaged with your library, and you know that other people may have a different version of the same library being loaded.
In my case, I got here from looking into using Kotlin (and the Kotlin standard library) for a Minecraft server plugin, and was curious why I needed to use shade for building. In that scenario, there will be a bunch of other plugins all running similar versions of the Kotlin standard library, every one a different version, AND there is no dependency management like Maven: my jar needs to ship pre-packed, drop-and-go.
This is where a so-called uber-jar is needed that both packs in all the dependencies and avoids conflicting with other versions; even the assembly plugin's docs say it cannot effectively manage this requirement:
If your project wants to package your artifact in an uber-jar, the assembly plugin provides only basic support. For more control, use the Maven Shade Plugin.
This is the entire purpose of the shade plugin: it creates a jar with all the dependency classes copied in, BUT it also renames those classes and refactors your code to match, so that your classes will not conflict with any other classes in libraries being imported elsewhere. This is the shade plugin's only purpose, and it is incredibly important. You could still use the assembly plugin afterwards to pack it all up in a zip or do other things, but refactoring like that is not assembly's purpose.
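For reference, a minimal shade configuration with relocation might look like the sketch below. The relocation pattern and shaded package name are illustrative assumptions, not taken from any particular project:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Placeholder pattern: move the bundled kotlin stdlib under
               your own namespace so it cannot clash with other plugins'
               copies of a different kotlin version -->
          <relocation>
            <pattern>kotlin</pattern>
            <shadedPattern>com.example.myplugin.kotlin</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```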
Have you tried this?
import myImage from './assets/myImage.png';
<img src={myImage} alt="description" />
You can try the src folder as well.
Your best option would be placing the image in the components folder,
as mentioned in a previous answer.
Here's a quick summary of the previous answer:
Adding images to the public folder and to the src folder are both acceptable methods; however, importing images into your component has the benefit that your assets will be handled by the build system and will receive hashes, which improves caching/performance. You'll also get the added benefit that it will throw an error if the file is moved, renamed, or deleted.
I had the same issue on 2025-05-09 when I changed from a short token to a long token. It shows me this error, and I have no idea how to resolve it. Does anyone have a solution?
button[aria-label^="Follow"]
You can also use $= to match the ending instead, or *= for substring matches.
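A small illustration of the three operators; the selectors other than the first are hypothetical examples, not taken from the original question:

```css
/* ^= : attribute value starts with "Follow" */
button[aria-label^="Follow"] { outline: 2px solid red; }

/* $= : attribute value ends with the given string (hypothetical example) */
button[aria-label$="back"] { outline: 2px solid blue; }

/* *= : attribute value contains the substring anywhere (hypothetical example) */
button[aria-label*="ollow"] { outline: 2px solid green; }
```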
All of it boils down to as much as I hate to swear pure shit because now it shows in the code that you can change the timeframe of when things were posted to everything in the Micro stacks . You guys have one ahead and literally reformat a time on how things are posted that’s cheating I don’t care about what means or what you try to write into the code if you’re doing things at a later tense, and then dating to. To not have happened yet so that It can work out for you when that time does come you’re cheating…AND YOU TELL THAT WHICH FROM WITCH TO WITCH TYPE OF PIKE DESERVING …THAT I don’t care WHICH FUTURE PASSED NOSTALGIA THROUGHOUT THE WHOLE META VERSE COMES FROM. Once it gets boiled down, you’re all cheating. Now trust me. I very well know how this sounds to the average person so yeah you can make it like some crazy person wrote this, but I’ll take you back through and show you where I got this information from not that it matters probably because you’ll just reformat that also, but that in itself will prove..
Check your image's size; Firefox allows sizes of 1-2 MB. Second, it can occur due to a security policy: Firefox may block base64-encoded images, so check the console for CSP errors or disable extensions to test. A CSP error may look like: Content-Security-Policy: img-src 'self'; If that is the case, you can fix it by adding data:, i.e., Content-Security-Policy: img-src 'self' data:;
Not 100% sure, but personally I like the Prettier extension which just has great support for all languages.
It will work using the transformation expression below:
iif(isNull($df_var_last_update_dt),fromUTC(currentUTC(),'UTC-4'),toTimestamp($df_var_last_update_dt,'yyyy-MM-dd\'T\'HH:mm:ss.SSS'))
"df_var_last_update_dt": "2025-02-10T23:22:08.657Z"
Just don't, unless you're making Minecraft.
Okay, so I found a solution that works. Apparently I can't use Open to open Excel from within Excel; I have to use Workbooks.Open. This solution simply opens the source file, copies the relevant sheet into the current document, and closes the source file. Problem solved. (The help document it is based on is here, but it is a bit incorrect and needed playing with: https://learn.microsoft.com/en-us/office/vba/api/excel.workbooks.open)
Sub ImportWorksheet()
    Dim sourcefile As String, sourceloc As String, sourcetab As String
    Dim Destination As String
    sourcefile = "space flight.xlsx"
    sourceloc = "C:\Users\moish\OneDrive\Documents\space flight.xlsx"
    sourcetab = "Bodies"
    Destination = ActiveWorkbook.Name
    Workbooks.Open sourceloc
    Sheets(sourcetab).Copy Before:=Workbooks(Destination).Sheets(1)
    Windows(sourcefile).Activate
    ActiveWorkbook.Close SaveChanges:=False
    Windows(Destination).Activate
End Sub
On top of your Android app having the RECORD_AUDIO permission, check whether the Google / Google Speech Recognition & Synthesis app in your Android device's settings has both microphone and notification permissions enabled. I was missing notification permissions for the Google Speech Recognition & Synthesis app that was providing SpeechRecognizer, and kept getting ERROR_INSUFFICIENT_PERMISSIONS.
There are several cases that can end up in ERR_EMPTY_RESPONSE. In my case, it was due to using http instead of https. Please check this link for some other cases, such as:
The port is blocked or busy
A proxy extension installed on your browser
A running VPN
Since you have no logic or outputs, your design is being optimized down to nothing. You can run the get_ports command by itself to see a list of all the ports that remain.
The question, and all of the answers so far, repeat the error message's mistake of not saying which service account they are talking about. There are typically three service accounts:
There is the account which the Cloud Run service will use at run time (when invoked). This error is not about it.
The error is about the Cloud Build account, which needs roles/run.builder. That's probably the Compute Engine default service account.
The deployment, e.g. the gcloud command, has to be logged in as someone. That may also be a service account. Whoever runs the command needs roles/run.developer or roles/run.sourceDeveloper.
The problem here is that too much of the documentation, and some of the error messages, don't say which account. Granting the correct permission to the wrong account doesn't help, which is probably the source of the comments saying "it doesn't work for me".
Solution found: I had to move the line setting the window visible to after initializing gamePanel.
You might find this video helpful.
The associated GitHub project is - https://github.com/mamonaco1973/container-intro/tree/main/.github/workflows
We walk through configuring GitHub actions for AWS, Azure and GCP.
I was able to resolve the 401 error; it turned out to be a basic mistake on my side of providing incorrect credentials. Closing the issue.
pbpaste with double quotes, or `pbpaste | sed 's/ /\\\ /g'`, used to work perfectly on my old MBP 2012 with paths copied via Cmd+Shift+C, but the same approach no longer works on a new M4 Mac mini. So, for a path like /Users/yyy/Documents/My File 1.pdf, open `pbpaste` throws the error "The files /Users/yyy/'/Users/yyy/Documents/My, /Users/yyy/Documents/File, /Users/yyy/Documents/1.pdf' do not exist." Adding quotes, open "`pbpaste`", yields "The files /Users/yyy/'/Users/yyy/Documents/My File 1.pdf' do not exist." Using sed doesn't change the situation at all. I tried macOS's native Terminal, iTerm and Kitty; all the same. Clueless. Lost. Need your help.
It's 9 years later and I'm disappointed the accepted answer on this is completely wrong. I also can't downvote or comment on anything because of the reputation system here. The issue with
tennis_window.title('Top Ten Tennis')
has nothing to do with the error the OP is asking about, as I'm getting the exact same error without ever trying to set the title or a string anywhere. Something about just creating a new window with Toplevel is causing a weird error where Python keeps thinking it is a string object. I'm currently unable to call something as basic as:
NewWindow = Toplevel()
without getting this exact same error saying 'str' object is not callable. I've even used type hinting to explicitly state the type, but it always seems to think it is a 'str' object. It'd be awesome if someone had the real answer for this.
For reference, here is the entirety of the code I'm using to get this exact same type error:
def CreateNewWindow():
    NewWindow = Toplevel()
That's literally it. I then just call the function from a ttk.Button with command=CreateNewWindow()
You may also fix the issue by updating these packages to the latest version:
@testing-library/jest-dom
@testing-library/react
From my review of the answers, practice code, and documentation, I discovered that the onMessageReceived() method is called depending on the type of message, whether a notification message or a data message. The thread started because a message was received when the app was in the foreground but not in the background. That is not true of the onMessageReceived() method when the message is data, or data plus notification.
In onMessageReceived() I was trying to send the data to my server, but the server didn't get the data when the app was in the background or killed, even though logging showed that onMessageReceived() ran effectively. The problem was the way I was handling the data-sending logic.
I tried starting a service to do this, and finally, after making the service a foreground service, the data reached the server whether the app was killed or in the foreground. You need to perform a short job in the method or it will not run; otherwise, schedule a job and run it with a worker, or use a foreground service.
I have the exact same problem, currently running pytest 8.3.5 from VS Code. In my case I tried a couple of things, including changing pytest.ini to
[pytest]
addopts =
noops = true #it didn't work
Another attempt was to find run_pytest_script.py inside your .vscode-server/extensions.. and replace
#arg_array = ["-p", "vscode_pytest", *args, *ids]
print("Running pytest with args: " + str(arg_array))
pytest.main([]) #removed arg_array from params
That breaks VS Code testing for some bizarre reason I couldn't figure out. Original code here
What ended up working was a change in my code, since I use pydantic:
ExeConfig.config = SettingsConfigDict(
    case_sensitive=False,
    cli_parse_args=True,
    cli_ignore_unknown_args=True,  # this should ignore the --rootdir ...
)
Then elect to Evaluate 'on change of group' and select the JOB group (assuming the report is grouped on JOB). Set the Reset level to the desired logic.
The correct answer for .NET 6.0 and later is CULong, and CLong for its signed counterpart. Those two types correctly handle the differences in size of C's long type between Windows/non-Windows and 32-bit/64-bit targets. See https://github.com/dotnet/runtime/issues/13788 for more context.
Found the answer - it is here:
I have this exact issue:
The email fails when Power Automate initially runs, but if I re-test the exact same run that failed from Power Automate, it processes everything properly and the files get to Azure DevOps properly. If I change "Body is Base64" to No, the workflow seems to be successful, but when I try to open any attachment in Azure DevOps it's corrupted.
I considered this issue in my current project (https://github.com/uonrobotics/ctmd) and redefined mdarray to use the std::array container when rank_dynamic() is 0.
namespace detail {

template <extents_c extent_t>
[[nodiscard]] inline constexpr size_t static_mdarray_size() noexcept {
    if constexpr (extent_t::rank() == 0) {
        return 0;
    } else {
        return []<size_t... Is>(std::index_sequence<Is...>) {
            return (extent_t::static_extent(Is) * ...);
        }(std::make_index_sequence<extent_t::rank()>{});
    }
}

} // namespace detail

template <size_t Rank, class IndexType = size_t>
using dims = dextents<IndexType, Rank>;

template <typename T, extents_c extent_t>
using mdarray = std::conditional_t<
    extent_t::rank_dynamic() == 0,
    std::experimental::mdarray<
        T, extent_t, layout_right,
        std::array<T, detail::static_mdarray_size<extent_t>()>>,
    std::experimental::mdarray<T, extent_t, layout_right, std::vector<T>>>;
With this definition, the mdarray can be instantiated as a constexpr when the rank is static, for example:
constexpr auto a = mdarray<T, extents<size_t, 2, 1, 2>>{std::array<T, 4>{1, 2, 3, 4}};
Alternatively, it can be created as a non-constexpr object with dynamic extents:
const auto a = mdarray<T, dims<3>>{std::vector<T>{1, 2, 3, 4}, dims<3>{2, 1, 2}};
Is this approach helpful for your problem?
There are many ways to do this. What you really need is a webserver to serve your content, in a way similar to production. And you have a lot of options: nginx, apache, etc.
But for something simpler you can even use a built-in php server.
Example: php -S localhost:8000 -t public (run that in the project root)
Before you do that though, there are some important steps:
- npm run build
Set these in .env:
APP_ENV=production
APP_DEBUG=false
You may also run these to optimize the server:
php artisan config:cache
php artisan route:cache
php artisan view:cache
If performance is a concern, you need to avoid the ORM as much as you can. If looking up vendor IDs based on the account number is a repetitive operation, you can write an insert/update trigger on the vendor table to get the userId from the Accounts table.
To summarize: apply this UPDATE SQL query once, then write a trigger on both insert and update actions on the vendor table to keep its userId current.
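As a hedged, in-memory illustration of the lookup the trigger would perform (hypothetical Java types and field names, invented for this sketch; the actual fix is the SQL UPDATE plus trigger described above), the join from account number to userId looks like this:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class VendorBackfill {
    // Hypothetical rows; the real answer does this with an UPDATE + trigger in SQL.
    static class Account {
        final String accountNumber; final long userId;
        Account(String accountNumber, long userId) { this.accountNumber = accountNumber; this.userId = userId; }
    }
    static class Vendor {
        final String accountNumber; Long userId;
        Vendor(String accountNumber) { this.accountNumber = accountNumber; }
    }

    // One-time backfill: resolve each vendor's userId via its account number,
    // mirroring the UPDATE ... FROM Accounts join.
    static void backfillUserIds(List<Vendor> vendors, List<Account> accounts) {
        Map<String, Long> userByAccount = accounts.stream()
            .collect(Collectors.toMap(a -> a.accountNumber, a -> a.userId));
        for (Vendor v : vendors) {
            v.userId = userByAccount.get(v.accountNumber);
        }
    }
}
```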
Why not the following Dockerfile:
# Build stage
FROM node:alpine AS builder
WORKDIR /usr/src/app
COPY package.json package-lock.json* ./
RUN npm install --save-dev @types/node
COPY . .
RUN npm run build
# Production stage
FROM node:alpine
WORKDIR /usr/src/app
RUN npm install -g serve
COPY --from=builder /usr/src/app/dist ./dist
EXPOSE 80
CMD ["serve", "-s", "dist", "-l", "80"]
For some reason I cannot use the nginx solution above. Not sure if it is because I am using nginx in another Docker container to orchestrate many containers working together. I use npm's serve package instead. Not sure what the downside is; would love to hear any comments from you.
My idea is that you might not have set the Service Account permission in IAM & Admin. Try to locate IAM & Admin in the Google Cloud Console and make sure the Document AI user has the required permission for each.
There is also a quota on API usage that can produce the 403 Forbidden error; check it as well under IAM & Admin, then Quotas, to see your allocation.
I think your best choice might be to try Laravel Herd, or better still, Laravel Valet. It might give you the closest thing to a prod-environment feel.
I have the same problem, as do others. It only seems to be an issue with a list long enough to necessitate a fair amount of scrolling, and it becomes more evident as you scroll up and down a few times while tapping on the items.
Two ways to run jest without using npm and without a global install (but you will need npm installed and to have run npm install):
npx jest
Or
node node_modules/jest/bin/jest
This is probably because the image by default has 'display: inline'. You can try 'display: inline-block', or you can put it inside a div and rotate and float the div.
According to this page, you might need to include the --allow-unk flag so that unknown words can appear in the output.
Here is a scrappy workaround, though it might not be what you intended:
<table>
  <tr>
    <td width="500" align="left">⬅️ Left</td>
    <td width="500" align="right">Right ➡️</td>
  </tr>
</table>
If you're looking for a solution to handle large files in a decentralized way, you might want to consider using GraphDB, a distributed graph database that leverages the Origin Private File System (OPFS) for efficient file storage. Unlike GUN.js, which has a 5MB localStorage limitation, GraphDB is designed for scalable storage and real-time data modeling. You can explore it further and give it a try here: GraphDB, and you can see the documentation here: GraphDB Wiki.
With json v2, you can have this struct
type Foo struct {
    // Known fields
    A int `json:"a"`
    B int `json:"b"`

    // The type may be a jsontext.Value or map[string]T.
    Unknown jsontext.Value `json:",unknown"`
}
Official documentation and example here: https://pkg.go.dev/github.com/go-json-experiment/json#example-package-UnknownMembers
You can now customize the severity of specific events; setting an event to ResilienceEventSeverity.None will suppress the logs.
https://www.pollydocs.org/advanced/telemetry.html#customizing-the-severity-of-telemetry-events
services.AddResiliencePipeline("my-strategy", (builder, context) =>
{
    var telemetryOptions = new TelemetryOptions(context.GetOptions<TelemetryOptions>());

    telemetryOptions.SeverityProvider = args => args.Event.EventName switch
    {
        // Suppress logging of ExecutionAttempt events
        "ExecutionAttempt" => ResilienceEventSeverity.None,
        _ => args.Event.Severity
    };

    builder.ConfigureTelemetry(telemetryOptions);
});
This happens because during a series of redirects the page context becomes invalid: waitForNavigation can stop on one of the intermediate pages, and/or the necessary selector is missing, so waitForSelector fails because the JSHandles can no longer be evaluated.

But in my case I need to find an element on the final page after a series of redirects, and that means the page has already finished redirecting! All of the suggestions above fail with an error.

I came to a crude solution: a try-catch loop with pauses (sleeps), which simply works...
Based on comments, I decided to remove as much as I could until the error was gone, and eventually the only thing in the cpp file was int main() { return 0; } and I still had the error!
So I deleted the project and created a new editor project, which actually works now.
If you get "Cannot find 'GeneratedPluginRegistrant' in scope", remove this line: GeneratedPluginRegistrant.register(with: self)
What does the Postgres log file say about it? It is refusing the connection, so why?
I think Postgres by default has pretty tight security and as such will only allow known connections.
There is a new set of tools for performing SQL Server migrations at https://github.com/Schema-Smith/SchemaSmithyFree. They are free to use for small businesses and teams under a community license. All code is available in the repository, no black boxes or hidden anything.
The update process, SchemaQuench, takes a state-based approach to updating tables. You define the final state you want your table to have, and the update process performs whatever ALTER statements are necessary to get you there. SchemaQuench can apply all of your objects (tables, functions, views, etc.) as well.
There is also a tool for reverse engineering your database(s) into the expected metadata format all of which can be checked into your repository just like any other code.
Documentation can be found at https://github.com/Schema-Smith/SchemaSmithyFree/wiki
When using androidx.appcompat.widget.Toolbar, execute ToolbarUtils.getActionMenuView(toolbar).setExpandedActionViewsExclusive(false); after inflating the menu in onCreateOptionsMenu; this seems to resolve most issues with an expandActionView call. The workaround uses library-restricted features, so it may be removed at any time.
I found it, I think
environment:
  - ENV_VARIABLE_IN_CONTAINER=$ENV_VARIABLE_ON_HOST
seems to have worked.
Try the command below; it will help you create your own package/application id:
npx @react-native-community/cli init MyTestApp --package-name com.thinesh.mytest --title "My Test App"
What does your infrastructure look like? Is the C program on one computer and the browser on another? Do you have a web server? There are a lot of different ways.
I ultimately reached out to AWS support with this same question, and they confirmed that it is not possible: Google initiates the link between itself and Cognito and therefore does not allow you to pass any data into the pre-sign-up trigger. This is likely by design, so that no one can manipulate the sign-up process by passing additional data along with the Google token. Fully understandable from Google's perspective.
In terms of what to do, it's a pretty niche use case to require linking two accounts with different emails. Moving forward, I will add the user's business email to the newly created social account as an attribute from the session once they are logged in. I will end up with duplicates of some users in Cognito, but I will have to live with that. The social login account will be the one with access when the user logs in again. I could even run a lookup on users' attributes to find whether someone has a separate account when logging in.
Here is the most accurate way to find the maximum average salary:
select max(avg(salary))
from workers
group by worker_id;
You can apply a group function to another group function that retrieves a column of data values, i.e., max applied to avg(salary).
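For intuition, the same two-step aggregation can be sketched in plain Java streams (a hypothetical in-memory illustration with made-up data, not part of the SQL answer): compute the average salary per worker, then take the maximum of those averages.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MaxAvgSalary {
    // Hypothetical record standing in for a row of the workers table.
    record Worker(int workerId, double salary) {}

    static double maxAverageSalary(List<Worker> rows) {
        // Step 1: GROUP BY worker_id + AVG(salary)
        Map<Integer, Double> avgByWorker = rows.stream()
            .collect(Collectors.groupingBy(Worker::workerId,
                     Collectors.averagingDouble(Worker::salary)));
        // Step 2: MAX over the per-group averages
        return avgByWorker.values().stream()
            .mapToDouble(Double::doubleValue)
            .max()
            .orElseThrow();
    }

    public static void main(String[] args) {
        List<Worker> rows = List.of(
            new Worker(1, 1000), new Worker(1, 2000),  // avg 1500.0
            new Worker(2, 1800));                      // avg 1800.0
        System.out.println(maxAverageSalary(rows));    // prints 1800.0
    }
}
```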
You can create a util method to wrap the amount:
private BigDecimal resolveScientificAmount(BigDecimal amount) {
    if (amount == null) {
        return null;
    }
    return new BigDecimal(amount.toPlainString());
}
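As a quick illustration of what the wrapper does (a standalone sketch with the method made static for demonstration): a BigDecimal built from scientific notation keeps that form in toString(), while round-tripping through toPlainString() normalizes it to the plain representation.

```java
import java.math.BigDecimal;

public class AmountDemo {
    // Same helper as above, made static for this standalone example.
    static BigDecimal resolveScientificAmount(BigDecimal amount) {
        if (amount == null) {
            return null;
        }
        return new BigDecimal(amount.toPlainString());
    }

    public static void main(String[] args) {
        BigDecimal scientific = new BigDecimal("1.2E+3");
        System.out.println(scientific);                           // 1.2E+3
        System.out.println(resolveScientificAmount(scientific));  // 1200
    }
}
```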
Check the Anaconda environment you are working in, because the module might be installed in another environment. You can exit the current one by running conda deactivate and then check again whether you are able to import the module.
You're encountering this issue because you're trying to load a .pyd (native extension module) using a SourceLoader, which is intended for .py files (text). Native extensions like .pyd must be loaded using ExtensionFileLoader; otherwise, you'll get DLL or encoding-related errors.
Assuming you dynamically created this structure:
/a/temp/dir/foo/
    __init__.py
    bar.pyd
You can correctly import the package and the .pyd module like this:
import sys
import importlib.util
import importlib.machinery
import os
def import_pyd_module(package_name, module_name, path_to_pyd, path_to_init):
    # Ensure the package is imported first
    if package_name not in sys.modules:
        spec_pkg = importlib.util.spec_from_file_location(
            package_name,
            path_to_init,
            submodule_search_locations=[os.path.dirname(path_to_init)]
        )
        module_pkg = importlib.util.module_from_spec(spec_pkg)
        sys.modules[package_name] = module_pkg
        spec_pkg.loader.exec_module(module_pkg)

    # Load the compiled submodule
    fullname = f"{package_name}.{module_name}"
    loader = importlib.machinery.ExtensionFileLoader(fullname, path_to_pyd)
    spec = importlib.util.spec_from_file_location(fullname, path_to_pyd, loader=loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[fullname] = module
    spec.loader.exec_module(module)
    return module
Your MyLoader inherits from SourceLoader, which expects a .py file and calls get_data() expecting text. This cannot be used for .pyd files, which are binary and must be handled using the built-in ExtensionFileLoader.

If you want a custom import system with dynamic .pyd loading, you can still use a MetaPathFinder, but your loader must delegate to ExtensionFileLoader.
Usage:
mod = import_pyd_module(
    package_name="foo",
    module_name="bar",
    path_to_pyd="/a/temp/dir/foo/bar.pyd",
    path_to_init="/a/temp/dir/foo/__init__.py"
)
mod.some_method()
I am using the AWS CDK for .NET and experienced the same issue as described above: I always got an error (could not find Policy node) when attempting to add the dependency as in Hayden's example.
I was able to resolve the issue by updating the Amazon.CDK.Lib NuGet package from 2.191.0 to 2.195.0.
Check out Defang.io: a single command (defang compose up) deploys your Docker Compose project to your account on AWS / GCP / DigitalOcean. It supports networking, compute, storage, LLMs, etc., and is even integrated into IDEs such as VS Code, Cursor, and Windsurf via their MCP Server, so you can deploy straight from the IDE.
You can also grep everything before 'list:' and pipe to grep for email:
grep -B 1000000 "list:" example.txt | grep "[email protected]"