DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': '*',
        'USER': '*',
        'PASSWORD': '*',
        'HOST': 'localhost',
        'PORT': '3306',
        'CONN_MAX_AGE': 0,  # add this
        'OPTIONS': {
            'charset': 'utf8mb4',
            'connect_timeout': 60,
            'init_command': "SET sql_mode='STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'"
        }
    }
}
This configuration has been working fine for me so far.
Facing the same issue. Did you find any solution? I have a Python FastAPI application in which I used a .env file, and I know it's not recommended or best practice to push a .env file. If you have the solution, can you guide me?
Another potential fix for people hitting this on work laptops: if your company uses Trend Vision One, disable the Trend Micro LightWeight Filter Driver on your network adaptors.
Please check the image and confirm the X, Y, Z coordinates are correct.
Actually, for v-navigation-drawer, when you set max-height it automatically becomes scrollable. Here is an example:
<v-navigation-drawer
v-model="historyPreviewDrawer"
temporary
location="bottom"
style="max-height: 50%"
class="custom-drawer"
>
I think people already know the answer, but for the newbie...
You need to set USART3's TX and RX to PD8 and PD9. By default, USART3's TX and RX would be PB10 and PB11, so we need to change the ports and pins manually.
For more information, you can find schematics on the CAD Resources page at ST.com: https://www.st.com/en/evaluation-tools/nucleo-f767zi.html#cad-resources
Solved! The problem was that the src path was missing.
I was able to resolve this issue by adding jaxb-2.2 and wmqJmsClient-2.0 features and removing wasJmsClient-2.0 and wasJmsServer-1.0.
I also had to add the following to server.xml:
<keyStore id="myTrustStore" location="/opt/ibm/wlp/usr/servers/defaultServer/resources/security/b00-truststore.jks" type="JKS" password="" />
<ssl id="defaultSSLConfig" trustStoreRef="myTrustStore"/>
You can try to delete or modify this configuration in gradle.properties:
org.gradle.configuration-cache=false
org.gradle.unsafe.configuration-cache=false
Note: If your file uses React Hooks, you can't directly use async/await in that component, as React Hooks require a "use client" directive, whereas async functions are treated as server-side.
If you run into this conflict, a good approach is to nest a client component inside a server component. The server component can handle the data fetching using async/await, and then pass the retrieved values as props to the client component, which can safely use React Hooks.
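A minimal sketch of that pattern (assuming the Next.js App Router; the file names and the endpoint are illustrative, not from the original answer):

// app/page.tsx - server component: may be async and fetch with await
import UserGreeting from "./UserGreeting";

export default async function Page() {
  const res = await fetch("https://example.com/api/user"); // hypothetical endpoint
  const user: { name: string } = await res.json();
  // Pass plain data down as props; the client component never fetches.
  return <UserGreeting name={user.name} />;
}

// app/UserGreeting.tsx - client component: may use React Hooks
"use client";
import { useState } from "react";

export default function UserGreeting({ name }: { name: string }) {
  const [expanded, setExpanded] = useState(false);
  return (
    <button onClick={() => setExpanded(!expanded)}>
      {expanded ? `Hello, ${name}!` : name}
    </button>
  );
}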
I managed to find a solution. I noticed that in my data, just before the NAs, the values increase much more slowly, so the algorithm interprets that as a downward parabola. So I removed 3 values before and after each block of NAs, and I'm getting good results. It won't work in every case, but for me, it's working fine.
inertie2sens <- function(data_set, energie) {
  # Forward pass: mark the first value after each block of NAs.
  for (i in 2:(nrow(data_set) - 1)) {
    if (is.na(data_set[i, energie]) && !is.na(data_set[i + 1, energie])) {
      data_set[i + 1, energie] <- -1
    }
  }
  # Backward pass: mark the last value before each block of NAs.
  for (i in nrow(data_set):2) {
    if (is.na(data_set[i, energie]) && !is.na(data_set[i - 1, energie])) {
      data_set[i - 1, energie] <- -1
    }
  }
  # Turn the markers (and any remaining NAs) into NA.
  for (i in 2:nrow(data_set)) {
    if (is.na(data_set[i, energie]) || data_set[i, energie] == -1) {
      data_set[i, energie] <- NA
    }
  }
  return(data_set)
}
I have the same issue, and after research I didn't find any way to do this. The shadow root of the autocomplete element is closed by default, so we can't access the input to change the placeholder.
I don't know if the path has changed since the other answers were posted or if my answer is specific to Windows Server 2016, but I found the logs in C:\Windows\System32\config\systemprofile\AppData\Local\Temp\Amazon\EC2-Windows\Launch\InvokeUserData, under InvokeUserDataOutput.log and InvokeUserDataError.log.
This seems like a problem with your driver setup. Can you please share your capabilities?
I tried to include different options into the exams2pdf / exams2nops command, but nothing worked for me...
height = 50, width = 50
height = 5, width = 5
height = 0.5, width = 0.5
fig.height = 0.5, fig.width = 0.5
out.width = 0.5
Am I using the wrong numbers, or what am I doing wrong? I only have pictures that I generated within R:
```{r Elasticita, echo=FALSE, fig.height = 5, fig.width = 5, fig.path = "", fig.cap = ""}
...
...
...
and I also tried to change the size there, but I think it is then overwritten by the exams2nops command.
What I did not yet try is modifying the template.
Am I making a mistake with the options in the exams2nops command?
Thank you already!
This is a solid and scalable solution: using global.setup.ts ensures consistent fixture data across retries and isolates setup from test logic. It also avoids the pitfalls of module-level variable re-initialization. Great approach for maintaining test reliability in state-dependent scenarios!
This is due to how event handling and focus work in Qt Quick when a MouseArea is placed inside the background of a TextField.
It's advisable to have Apple Sign-In only happen on Apple/iOS devices. Don't do Apple Sign-In on Android. Nevertheless:
If authentication is successful, you can consider setting up deep links for your app, such that redirects from Chrome to your web URL will launch the mobile app and perform the required operations.
On Android, instead of opening Chrome to perform the sign-in operation, consider opening the URL in a dialog or a new page that is a WebView. That way you can easily manage the redirects from within the WebView. Launching Chrome to perform an action and then redirecting back to an app is more of an iOS behaviour.
Currently there is no API or tweak that will do what you are requesting. You can however request the feature via the idea station at https://forums.autodesk.com/t5/acc-ideas/idb-p/acc-ideas-en
What I can say is that your card container likely has a fixed height and overflow: hidden, so when the graphs appear they overflow upward and get clipped.
A possible fix is to remove the fixed height from the card component and remove overflow: hidden.
If the issue is still not resolved, share your code block so I can help you out exactly.
I finally solved the issue by uninstalling the langchain package and reinstalling it (only this package), even though it looked up-to-date (the rest was up-to-date as well).
Thanks for your interesting question. Do you run your tests against localhost?
Otherwise, if you run your app in Payara Micro, you can even run Arquillian against Payara Embedded with the help of the Payara Server Embedded Arquillian Container Adapter - it's the simplest way of getting Arquillian to work with Payara. See https://hantsy.github.io/ for a comparison between the Payara Arquillian adapters and their configuration - note the simple embedded configuration.
There is an open GitHub issue with the Payara Embedded Adapter regarding Java's module system Jigsaw and slow shutdown of the Arquillian deployment. Workarounds are listed there.
I migrate old Java EE apps with globally installed application servers to Jakarta EE Payara Micro apps, which leads to having a simple bootRun analogue with IDE integration:
build.gradle
plugins {
...
id 'fish.payara.micro-gradle-plugin' version '...'
...
}
java {
toolchain {
languageVersion = JavaLanguageVersion.of(17)
}
}
test {
...
jvmArgs = [
'--add-opens', 'java.base/sun.net.www.protocol.jar=ALL-UNNAMED',
...
]
}
payaraMicro {
payaraVersion = '...'
...
}
dependencies {
...
testImplementation("org.jboss.arquillian.junit5:arquillian-junit5-container:1.9.4.Final")
testImplementation("fish.payara.arquillian:arquillian-payara-server-embedded:3.1")
testRuntimeOnly("fish.payara.extras:payara-embedded-all:6.2025.4")
...
}
arquillian.xml
<?xml version="1.0"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://jboss.org/schema/arquillian"
xsi:schemaLocation="http://jboss.org/schema/arquillian
http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
<container qualifier="payara-embedded" default="true">
<configuration>
<property name="resourcesXml">src/test/resources/glassfish-resources.xml</property>
</configuration>
</container>
</arquillian>
glassfish-resources.xml
<!DOCTYPE resources PUBLIC
"-//GlassFish.org//DTD GlassFish Application Server 3.1
Resource Definitions//EN"
"http://glassfish.org/dtds/glassfish-resources_1_5.dtd">
<resources>
<!-- TODO: datasource definition -->
</resources>
I only just realised the HabitStreakManager was the only collection that was double-embedded AND not within the same file as its parent collection (HabitTracker).
So, to fix it, I made the file containing HabitStreakManager a part of the file containing its parent collection, HabitTracker.
const {providers: timeagoProviders = [] } = TimeagoModule.forChild()
And then add those providers to your standalone component's providers array.
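A minimal sketch of how that can look in a standalone component (assuming the ngx-timeago package; the selector and template are illustrative):

import { Component } from '@angular/core';
import { TimeagoModule } from 'ngx-timeago';

// Pull the providers out of the classic NgModule-based API.
const { providers: timeagoProviders = [] } = TimeagoModule.forChild();

@Component({
  selector: 'app-demo',
  standalone: true,
  imports: [TimeagoModule],
  providers: [...timeagoProviders], // make the timeago services available
  template: `<span timeago [date]="now"></span>`,
})
export class DemoComponent {
  now = new Date();
}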
Alternatively, instead of mvn jetty:run, use this command without the need to set MAVEN_OPTS:
mvnDebug jetty:run
Here's the reference.
In my case, I am using VS Code. When I run the command, the terminal would only show:
Preparing to execute Maven in debug mode
Listening for transport dt_socket at address: 8000
and it waits until I click the Start Debugging button (after adding the corresponding configuration in launch.json and selecting it).
Below is the configuration I used, just in case.
{
"type": "java",
"name": "Attach to Remote",
"request": "attach",
"hostName": "localhost",
"port": 8000,
"projectName": "your-project-name" // Optional: Replace with your project name
}
Alright, thanks to @G.M. I could come up with an answer; if anyone is interested, I will share it.
It summarizes the steps from the document he shared on GNU GCC freestanding environments:
main.c:
#include "app.h"
#include <gcov.h>
#include <stdio.h>
#include <stdlib.h>
extern const struct gcov_info *const __gcov_info_start[];
extern const struct gcov_info *const __gcov_info_end[];
static void dump(const void *d, unsigned n, void *arg) {
(void)arg;
fwrite(d, 1, n, stderr);
}
static void filename(const char *f, void *arg) {
__gcov_filename_to_gcfn(f, dump, arg);
}
static void *allocate(unsigned length, void *arg) {
(void)arg;
return malloc(length);
}
static void dump_gcov_info(void) {
const struct gcov_info *const *info = __gcov_info_start;
const struct gcov_info *const *end = __gcov_info_end;
__asm__ ("" : "+r" (info));
while (info != end) {
void *arg = NULL;
__gcov_info_to_gcda(*info, filename, dump, allocate, arg);
++info;
}
}
int main(void) {
application();
dump_gcov_info();
return 0;
}
app.c:
#include "app.h"
#include <stdio.h>
void application(void) {
int x = 1;
if (x == 1) {
printf("Works\n");
}
if (x == 2) {
printf("Doesn't work\n");
}
}
The app.h file is empty, just the application() function prototype.
Compile both files with --coverage -fprofile-info-section:
gcc --coverage -fprofile-info-section -c app.c
gcc --coverage -fprofile-info-section -c main.c
Dump the default linker script to a file you can edit:
ld --verbose | sed '1,/^===/d' | sed '/^===/d' > linkcmds
In linkcmds, find where .rodata1 is referenced and add the following below it. This will indicate to the linker that there is a special place in memory reserved for .gcov_info:
.gcov_info :
{
PROVIDE (__gcov_info_start = .);
KEEP (*(.gcov_info))
PROVIDE (__gcov_info_end = .);
}
gcc --coverage main.o app.o -T linkcmds # This will output an executable file "a.out"
Run it, redirecting stderr to a file, because that's where all the gcov info is dumped (via the dump() callback shown in main.c):
./a.out 2>gcda.txt
Finally, merge the dumped stream and generate the report:
gcov-tool merge-stream gcda.txt
gcov -bc app.c
-> File 'app.c'
Lines executed:85.71% of 7
Branches executed:100.00% of 4
Taken at least once:50.00% of 4
Calls executed:50.00% of 2
Creating 'app.c.gcov'
Lines executed:85.71% of 7
A site built with frontend code alone is static from an enforcement point of view: you can't make it enforce anything. Rate limiting needs a server beside your frontend, i.e. a server that responds to users' requests. You can't do it with the frontend part only.
The server-side part is the right place, because every request has to go through it to get a response. Some options (a generic sketch follows below):
Laravel's built-in throttle middleware.
Cloudflare rate limiting rules.
A Node.js proxy (if you need advanced control).
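To make the idea concrete, here is a minimal, framework-agnostic token-bucket sketch in TypeScript (in-memory only and purely illustrative; real deployments should use one of the options above or a shared store like Redis):

class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  allow(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // let the request through
    }
    return false; // caller should respond with HTTP 429
  }
}

const buckets = new Map<string, TokenBucket>();

function allowRequest(clientIp: string): boolean {
  let bucket = buckets.get(clientIp);
  if (!bucket) {
    bucket = new TokenBucket(10, 1); // burst of 10, ~1 request/second sustained
    buckets.set(clientIp, bucket);
  }
  return bucket.allow();
}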
The API official documentation doesn't list an "enableChat" property, so no surprise it doesn't do anything.
As far as I can tell, there's no way to enable/disable the chat on a given broadcast through the API.
I finally rebuilt it with a different working example using the import method and changed the way the position is added, without the Geocoder.
Also, in case it might be useful for someone: the map was so terribly slow because it used FontAwesome icons, which resulted in strange JS errors (while being displayed correctly) - as soon as I replaced them with static SVG, it was fine.
One thing though that is being ignored without an error: the MarkerClusterer options don't work (minimumClusterSize: 10, maxZoom: 15) - any ideas how to do this correctly, or is it just broken?
<div id="map"></div>
<script>(g=>{var h,a,k,p="The Google Maps JavaScript API",c="google",l="importLibrary",q="__ib__",m=document,b=window;b=b[c]||(b[c]={});var d=b.maps||(b.maps={}),r=new Set,e=new URLSearchParams,u=()=>h||(h=new Promise(async(f,n)=>{await (a=m.createElement("script"));e.set("libraries",[...r]+"");for(k in g)e.set(k.replace(/[A-Z]/g,t=>"_"+t[0].toLowerCase()),g[k]);e.set("callback",c+".maps."+q);a.src=`https://maps.${c}apis.com/maps/api/js?`+e;d[q]=f;a.onerror=()=>h=n(Error(p+" could not load."));a.nonce=m.querySelector("script[nonce]")?.nonce||"";m.head.append(a)}));d[l]?console.warn(p+" only loads once. Ignoring:",g):d[l]=(f,...n)=>r.add(f)&&u().then(()=>d[l](f,...n))})
({key: "", v: "weekly"});</script>
<script type="module">
import { MarkerClusterer } from "https://cdn.skypack.dev/@googlemaps/[email protected]";
async function initMap() {
const { Map } = await google.maps.importLibrary("maps");
const { AdvancedMarkerElement } = await google.maps.importLibrary("marker");
const center = { lat: 50.5459719, lng: 10.0703129 };
const map = new Map(document.getElementById("map"), {
zoom: 6.6,
center,
mapId: "4504f8b37365c3d0",
});
const labels = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const markers = properties.map((property, i) => {
const label = labels[i % labels.length];
const marker = new google.maps.marker.AdvancedMarkerElement({
position: new google.maps.LatLng(property.lat,property.lng),
content: buildContent(property),
title: property.name,
});
marker.addListener("gmp-click", () => {
toggleHighlight(marker, property);
});
return marker;
});
const markerCluster = new MarkerClusterer({ markers:markers, map:map, options:{minimumClusterSize: 10, maxZoom: 15} });
}
In my case, this issue occurred in the production environment, but it was resolved by simply changing the API URL from HTTP to HTTPS. The development environment works fine with HTTP.
People can easily get around frontend rate limiting, either by disabling JavaScript, editing code, or directly hitting the API with tools. Even if your frontend tries to stop abuse, it's not safe to rely on it alone. Backend rate limiting is much harder to bypass and helps protect your server from getting overloaded. It's a necessary extra layer of defense that frontend code just can't provide.
I'll join in because I also have a similar problem. I added a button to the app that changes the icon. When I click it, the app closes, the icon changes, and theoretically there's no problem. However: shortcuts stop working, and the app doesn't start automatically after pressing debug. You have to start it manually from the shortcut, and the errors I get are:
My main class: MainActivity
My alias name: MainActivityDefault
In the folder containing the main class I also have an empty class as in the shortcut name
When starting debug:
Activity class {com.myproject.myapp/com.myproject.myapp.MainActivityDefault} does not exist
When starting the shortcut:
Unable to launch. tag=WorkspaceItemInfo(id=-1 type=DEEPSHORTCUT container=# com.android.launcher3.logger.LauncherAtom$ContainerInfo@1a1bf6a targetComponent=ComponentInfo{com.myproject.myapp/com.myproject.myapp.MainActivityDefault} screen=-1 cell(-1,-1) span(1,1) minSpan(1,1) rank=0 user=UserHandle{0} title=PokaĆŒ na mapie) intent=Intent { act=android.intent.action.MAIN cat=[com.android.launcher3.DEEP_SHORTCUT] flg=0x10200000 pkg=com.myproject.myapp cmp=com.myproject.myapp/.MainActivityDefault bnds=[359,640][1115,836] (has extras) }
android.content.ActivityNotFoundException: Shortcut could not be started
at android.content.pm.LauncherApps.startShortcut(LauncherApps.java:1556)
at android.content.pm.LauncherApps.startShortcut(LauncherApps.java:1521)
at com.android.launcher3.BaseActivity.startShortcut(SourceFile:1)
at com.android.launcher3.BaseDraggingActivity.startShortcutIntentSafely(SourceFile:8)
at com.android.launcher3.BaseDraggingActivity.startActivitySafely(SourceFile:9)
at com.android.launcher3.Launcher.startActivitySafely(SourceFile:6)
at com.android.launcher3.uioverrides.QuickstepLauncher.startActivitySafely(SourceFile:2)
at com.android.launcher3.touch.ItemClickHandler.startAppShortcutOrInfoActivity(SourceFile:14)
at com.android.launcher3.touch.ItemClickHandler.onClickAppShortcut(SourceFile:8)
at com.android.launcher3.touch.ItemClickHandler.onClick(SourceFile:6)
at com.android.launcher3.touch.ItemClickHandler.b(Unknown Source:0)
at O0.f.onClick(Unknown Source:0)
at com.android.launcher3.popup.PopupContainerWithArrow.lambda$getItemClickListener$0(SourceFile:1)
at com.android.launcher3.popup.PopupContainerWithArrow.d(Unknown Source:0)
at F0.e.onClick(Unknown Source:2)
at android.view.View.performClick(View.java:7441)
at com.android.launcher3.shortcuts.DeepShortcutTextView.performClick(SourceFile:3)
at android.view.View.performClickInternal(View.java:7418)
at android.view.View.access$3700(View.java:835)
at android.view.View$PerformClick.run(View.java:28676)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:7839)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1003)
This question has already been partially answered, but I will duplicate it here. Thanks ĐŃH!
Excerpt from the documentation:
If signUp() is called for an existing confirmed user:
When both Confirm email and Confirm phone (even when the phone provider is disabled) are enabled in your project, an obfuscated/fake user object is returned.
When either Confirm email or Confirm phone (even when the phone provider is disabled) is disabled, the error message User already registered is returned.
This means that if you need to receive the User already registered error, you should disable email confirmation in the Supabase settings.
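For reference, a small sketch of how that error shows up with supabase-js (v2 API assumed; the URL and key are placeholders):

import { createClient } from '@supabase/supabase-js';

const supabase = createClient('https://YOUR_PROJECT.supabase.co', 'YOUR_ANON_KEY');

const { data, error } = await supabase.auth.signUp({
  email: 'user@example.com',
  password: 'secret-password',
});

// Only surfaced when email confirmation is disabled in the project
// settings, per the documentation quoted above.
if (error?.message === 'User already registered') {
  console.log('This email is already taken.');
}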
Thanks to those who tried to help, I got it working just now, after spending an entire day on it.
I'm not sure what exactly the problem was. After reinstalling VS, VS installer, SDKs and runtimes, including clearing dotnet references out of system environment vars, I was receiving an error on trying to launch a third-party program that also requires the SDK in question.
At that point I repaired the installation of V9 (which I had tried previously, before clearing out the sys enviro vars and reinstalling VS), and then everything came good.
Same thing for us, I think it's a problem with their servers
Magento 1.9 doesn't support applying the higher discount (catalog vs. promo) by default. You'll need custom code or a third-party extension to compare both and apply the higher one automatically. Best practice is to avoid catalog price rules and use shopping cart rules for more control.
I got a new Windows machine and struggled with this in RStudio for some days, and finally solved it by unchecking this option:
Tools > Global Options > Git/SVN > Sign Git commits
I am not sure whether I accidentally ticked it or it was on by default in the new version.
The system cannot find the path specified.
C:\Users\Admin\Desktop\react 1> npm start
npm error code ENOENT
npm error syscall open
npm error path C:\Users\Admin\Desktop\react 1\package.json
npm error errno -4058
npm error enoent Could not read package.json: Error: ENOENT: no such file or directory, open 'C:\Users\Admin\Desktop\react 1\package.json'
npm error enoent This is related to npm not being able to find a file.
npm error enoent
npm error A complete log of this run can be found in: C:\Users\Admin\AppData\Local\npm-cache\_logs\2025-06-11T07_46_02_229Z-debug-0.log
I fixed the problem by just logging out from Docker Desktop and logging in from the terminal, following the instructions to verify my device.
Add Proxies to your request using Residential proxies from a provider of your choice and you won't get flagged.
Hello Azure Load Balancer experts,
Six months ago, I installed a VM in Azure running an MSSQL server. The VM is located in a VNET and does not have a public IP. To access the SQL Server via the Internet, I first installed an external Azure Load Balancer and set up a NAT rule that forwards traffic from the public IP of the LB via port 3378 to port 1433 of the VM. In the NSG, I enabled port 1433 in the VNET (they are all open anyway) and allowed port 3378 to the internal IP of the VM from the Internet.
Port 1433 on the VM is open, and a connection from another VM in the same VNET can be established.
This worked, but then it suddenly stopped working. I probably changed something and can't find the error.
To me my setup looks the same as the post Azure Load Balancer Inbound NAT rule targeting VM. The only difference is that I have just one machine in the backend pool.
Does anyone have an idea how to solve the issue?
Best, Tino
@RajdeepPal You can use the Python uroman package (GitHub).
import uroman as ur

uroman = ur.Uroman()
print(uroman.romanize_string('अंतिम लक्ष्य क्या है'))
output: amtim lakssya kyaa hai
Without a full reproducer it's hard to tell, as there is nothing in the config that could cause this issue. There might be some bug in the .NET wrapper; make sure you are using the newest version.
One solution could be branch name + commit timestamp encoded as, e.g., base36 (the sequence) + (short) commit hash.
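A small sketch of what that could look like (illustrative only; in practice the inputs would come from your CI, e.g. git log -1 --format='%ct %h'):

function buildVersion(branch: string, commitUnixSeconds: number, commitHash: string): string {
  const sequence = commitUnixSeconds.toString(36); // compact, monotonically increasing per branch
  const shortHash = commitHash.slice(0, 7);
  return `${branch}-${sequence}-${shortHash}`;
}

// e.g. buildVersion('main', 1718000000, '9fceb02aa1f...') === 'main-seupa8-9fceb02'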
Follow the steps mentioned in the project-lombok setup guide: https://projectlombok.org/setup/maven
This worked for me.
Download Chrome 64-bit:
curl -L -o chrome_installer.exe https://dl.google.com/dl/chrome/install/googlechromestandaloneenterprise64.msi
Install Chrome:
msiexec /i chrome_installer.exe /qn /norestart
I have reproduced the problem. I got a 403 response when I didn't send the UserInfo JSON data to the /save endpoint. When I send the JSON data correctly, I receive a 200 response. I've used a securityFilterChain like yours.
Can you share your UserInfo class? I think there's a missing setter method or appropriate constructor.
The Envoy Proxy documentation says max_request_bytes is a uint32 value. Does that mean I can't upload files larger than 2^32 bytes (about 4 GiB)?
If it can help ;) Thanks to the YouTube channel OverSeas Media:
{# languagesForm.html.twig #}
{% from 'macros.html.twig' import languageFormMacro %}
<div class="languages-container form-control">
<h3>{{ form_label(languagesForm) }}</h3>
<button type="button" class="add-item-link btn btn-success" data-collection-holder-class="languages">
<i class="fa-solid fa-plus"></i>
</button>
<ul class="languages collection"
data-index="{{ languagesForm|length > 0 ? languagesForm|last.vars.name + 1 : 0 }}"
data-prototype="{{ languageFormMacro(languagesForm.vars.prototype)|e('html_attr') }}"
>
{% for languageForm in languagesForm %}
<li>
<div class="form-control">
{{ languageFormMacro(languageForm) }}
</div>
</li>
{% endfor %}
</ul>
</div>
{# macros.html.twig #}
{% macro languageFormMacro(languageForm) %}
<div class="row">
{{ form_row(languageForm.name) }}
{{ form_row(languageForm.expertise) }}
</div>
{% endmacro %}
Yes, there's a way to work around Codex-1's lack of native Dart and Flutter support by writing a setup script that installs the necessary tools before running your commands. Here's a shell script that's been shared by developers facing the same issue:
#!/bin/bash
set -ex
# Install Flutter SDK
FLUTTER_SDK_INSTALL_DIR="$HOME/flutter"
git clone https://github.com/flutter/flutter.git -b stable "$FLUTTER_SDK_INSTALL_DIR"
# Set up environment variables
export PATH="$FLUTTER_SDK_INSTALL_DIR/bin:$PATH"
echo "export PATH=\"$FLUTTER_SDK_INSTALL_DIR/bin:\$PATH\"" >> ~/.bashrc
# Precache Flutter binaries
flutter precache
# Navigate to your project directory
PROJECT_DIR="/workspace/[your_project_name]"
cd "$PROJECT_DIR"
# Get dependencies and run code generation
flutter pub get
flutter gen-l10n
flutter packages pub run build_runner build --delete-conflicting-outputs
# Run tests
flutter test
Replace [your_project_name] with your actual project folder. This script installs Flutter, updates the path, fetches dependencies, and runs tests, all in one go.
That said, some users have reported that Codex still struggles to execute test commands even after setup. If that's the case, you might consider running tests outside Codex in a CI/CD pipeline or local dev environment and using Codex primarily for code generation and editing.
The %cmdcmdline% approach winds up yielding the same 'shell execute' style path under the special context where a custom Windows filetype (cmd.exe ftype command) has been associated with an auto-execute file extension (assoc + %PATHEXT%) (here's my own project demonstrating that kind of setup, but it's not in scope to include all that code here).
What worked for us is a quickly built util, "pids.exe", that provides the nested parent process names, and then using that to check whether explorer.exe was present, for example:
for /f %%v in ('pids.exe -name -level 3') do if \"%%v\"==\"explorer.exe\" timeout /t 10
permalink to this usage in a demonstrable script
pids will dump a typical usage block when no args are provided, and there are a few more flags that might come in handy in slightly different situations.
I imagine there are other command line tools already out there doing this; I just couldn't find them with an admittedly light amount of searching.
calvincac, has a solution been found?
We have a similar case: we need a history of changes, but after a specific period of time (e.g. 6 years) we want to remove or anonymize personal data (according to the European General Data Protection Regulation, GDPR). Any ideas?
Same error here! Any idea how to fix it?
Oddly, it's great to find people struggling with the same problems. Did you ever find a fix for this?
I'm running a CMS stream, and when Chrome stops to ask if I want to continue, it kills my streaming. And yes, Auto Refresh Plus does not offer any option to suppress that pop-up.
I have Xcode version 16.4. It worked on iOS 17.5 but gives an error on iOS 18 and above.
Handling strings and variables in Batch scripts often presents complex challenges, even in seemingly simple tasks. This is especially true when dealing with special characters like <, &, |, or ", with double quotes being the key element for optimizing the ingenious solution proposed by George Robinson for a question he raised.
These symbols have specific meanings for CMD (the Windows command interpreter) and, if not handled correctly, can cause unexpected errors or undesired behavior.
This analysis explores a classic string manipulation problem in Batch. Due to the strict constraints imposed by the author, the technical challenge becomes complex and non-trivial, demanding both technical insight and command of CMD scripting.
The author imposed a highly restrictive set of challenges for string manipulation in the Batch environment, making the solution more complex. Understanding these four limitations is crucial to grasping the difficulty of the problem.
The variable myvar has a predefined value. Its initial definition, shown below, is an immutable aspect of the problem.
SET myvar="aaa<bbb"
This means any solution must account for the double quotes and the < character present in this initial value - in other words, this line of code cannot be modified.
Creating temporary files to assist in processing is not allowed. This invalidates common techniques that depend on them.
The FOR command: the powerful FOR command was disallowed, eliminating loops that would facilitate character-by-character string manipulation.
The SETLOCAL EnableDelayedExpansion command and variables with ! are not allowed, excluding a fundamental tool for advanced variable manipulation in Batch.
Without these restrictions, the solution would be simple. For example, the code below (which uses delayed variable expansion, restricted by the author) would solve the problem directly and concisely:
SETLOCAL EnableDelayedExpansion
SET "xxx=!myvar:~1,-1!"
ECHO !xxx!
With the restrictions he imposed, the author was left with few options, leading him to devise a creative solution using only variable substring substitution (%var:find=replace%).
The solution found by the author himself, which meets all the restrictions, uses two intermediate variables to achieve the goal.
SET xxx=%myvar:<=^^^<%
SET zzz=%xxx:"=%
ECHO %zzz%
Delimiting the assignment with double quotes, as in SET "variable=value", allows for safer handling of the value, eliminating the need for a second intermediate variable.
SET "xxx=%myvar:<=^^^<%"
ECHO %xxx:"=%
The key to this optimization lies in how CMD processes the command line. By using SET "variable=value", the outer double quotes act as a clear delimiter for the SET command. CMD then interprets everything inside these double quotes as the argument to be assigned to the variable, ensuring two crucial benefits:
Protection of special characters: characters like <, &, |, and > within the value are treated as literals by the SET command, not as CMD operators during the initial line parsing phase.
Control over quotes: the outer double quotes are automatically removed by the SET command from the content assigned to the variable.
In contrast, when the parameter is not delimited with double quotes (SET variable=unquoted_value), CMD parses the entire content before passing it to the SET command. If the value contains double quotes or other unescaped special characters, CMD may interpret them as part of command syntax or redirection operators, leading to errors or unintended retention of double quotes in the final variable value.
This difference makes Batch scripts more robust and predictable, especially when dealing with strings and special characters.
Handling strings containing special characters in Batch scripts demands a deep understanding of the Windows command interpreter's behavior, particularly with symbols like <, &, |, and ". The approach presented clearly demonstrates how creativity and advanced string manipulation techniques in CMD can overcome significant limitations, making automation more robust and predictable.
However, it is crucial to acknowledge that, given Batch's parsing complexities and structural constraints (especially in the absence of Delayed Expansion), achieving truly secure and reliable handling of arbitrary string inputs remains a persistent challenge. This is largely due to CMD's tendency to interpret special characters in unexpected ways, a direct consequence of its parsing model. Consequently, while ingenious solutions exist, the predictability and reliability of automation are more reliably achieved under controlled conditions, such as by carefully avoiding problematic characters or applying specific escaping techniques.
I'm afraid that ComponentCollection may not be used to integrate third-party React components within the SurveyJS Form Library. To integrate a third-party React component within the SurveyJS Form Library, implement a custom question model and renderer which would render your EditView. Please follow this tutorial: Integrate Third-Party React Components.
After checking, Google Apps Script redirects the response for doPost, and this is not supported by Google AppSheet.
https://www.googlecloudcommunity.com/gc/AppSheet-Q-A/Use-return-values-from-webhooks-conversion-error/m-p/772956/highlight/true
So the workaround is to change the webhook to call Google Apps Script directly.
BTW, the original idea was proposed by AI, so it didn't know this limitation, and that cost me a day.
The order of FirebaseApp.configure() and GMSServices.provideAPIKey() can matter. Try this sequence:
GMSServices.provideAPIKey("YOUR_GOOGLE_MAPS_API_KEY")
FirebaseApp.configure()
The Manifest.toml file updates when you do certain things in Julia:
Adding a package
Removing a package
Updating packages
If you use a different Julia version than the project version.
Editing a package locally can add a special path to Manifest.toml.
So I think you can check whether the versions are the same and avoid modifying packages unless necessary.
I believe now both Apple & Google allow for alternate billing
Google Play: https://developer.android.com/google/play/billing/alternative
Apple: https://developer.apple.com/support/apps-using-alternative-payment-providers-in-the-eu/
`Thread.Sleep()` blocks the current thread and should be avoided in most applications, especially UI (WinForms/WPF) or ASP.NET apps, as it can freeze the interface or waste server threads.
This code sleeps the current thread for 10 seconds (Thread.Sleep takes milliseconds, so a value of 10 would be only 10 ms):
System.Threading.Thread.Sleep(10000);
Could you specify the exact SCADA / historian product (and version) you're using?
Different vendors expose different protocols (OPC UA, MQTT, proprietary SQL APIs, etc.) and several of them already ship with Azure connectors or can publish straight to IoT Hub/Event Hubs without a separate broker.
For most OT applications just MQTT or OPC UA Publisher is good enough. Why do you need the throughput of Kafka in your application?
It seems like there are some compatibility conflicts with your libraries. I tried to replicate the code and it's working fine in the latest versions of TensorFlow and Keras. So, please try to upgrade TensorFlow to the latest version. Kindly refer to this gist and the Tested build configurations to use compatible versions.
SELECT
    COUNT(CASE WHEN Status = 'Pending' THEN 1 END) AS Pending,
    COUNT(CASE WHEN Status = 'Delivered' THEN 1 END) AS Delivered,
    COUNT(CASE WHEN Status = 'Cancelled' THEN 1 END) AS Cancelled
FROM Orders;
Had the same problem today, caused by an update of the (external) server API which changed its CORS settings.
Looking at the network tab of the developer tools in the browser made it seem like everything was OK (200), only in the console it showed the problem.
Further reads which helped me find the problem:
CORS - Is it a client-side thing, a server-side thing, or a transport level thing?
How to solve 'Redirect has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header'?
If you want Laravel to reload code/Blade as soon as a job runs, you need to use queue:listen instead of queue:work.
Just use the search() method and it will stop at the first match.
In Python:
import re
re.search(r'somepattern', string)
Laravel Pint doesn't have a configurable option to use tabs instead of spaces. Instead of Pint, you can use PHP CS Fixer directly with a custom configuration that disables indentation rules.
You need to have a selectedNodes array and it should work:
<p-tree [value]="myObjectNodes" selectionMode="checkbox" [(selection)]="selectedObjectNodes" />
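On the component side, a minimal sketch (PrimeNG's TreeNode type; the node data is illustrative):

import { Component } from '@angular/core';
import { TreeNode } from 'primeng/api';

@Component({
  selector: 'app-object-tree',
  template: `<p-tree [value]="myObjectNodes" selectionMode="checkbox" [(selection)]="selectedObjectNodes" />`,
})
export class ObjectTreeComponent {
  myObjectNodes: TreeNode[] = [
    { label: 'Parent', children: [{ label: 'Child' }] },
  ];
  // Must exist for the two-way [(selection)] binding to work.
  selectedObjectNodes: TreeNode[] = [];
}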
You can also use RandomAccessFile:
try (final RandomAccessFile raf = new RandomAccessFile("fileToTruncate", "rw")) {
final long size = raf.length();
if (size > 0) {
raf.setLength(size - 1);
}
} catch (final IOException ignored) {
}
It seems like you haven't created the symbolic link between storage/app/public and public/storage. Laravel requires this link to make files stored in storage/app/public accessible via the web. Run this command: php artisan storage:link. You can read more about the link here.
Open the app settings. (You will see this list)
Go to Open by Default List item. (This screen will open)
Enable the Open supported links and add the links by tapping on (+ Add Link). (See image)
Now, you can tap on the links, it will open the app.
You should restart the web server (for example Apache, Nginx, LSWS, and so on), then just reload the page in the browser.
# systemctl restart httpd
If you use Nginx, don't forget to restart php-fpm.
I think I could reproduce your 403 exception. Try to add the @ResponseBody annotation on the POST method in the controller.
Looks like the 9.0.2 version of Elasticsearch is not compatible with ElasticsearchSinkConnector.
Once I switched to an 8.x version (8.12.0), everything started to work.
In fact you would just need to install aardvark-dns via the package manager of your distribution (thanks to podman issue tracker issue 15848):
ArchLinux/Manjaro: sudo pacman -S aardvark-dns
Debian/Ubuntu: sudo apt install aardvark-dns
And the warning goes away.
Try using:
Select::make('relationship')
->hidden()
->saveRelationshipsWhenHidden()
It's nowhere in the documentation but it does exactly it says. Also available is:
Select::make('relationship')
->disabled()
->saveRelationshipsWhenDisabled()
Adding below code to program.cs resolved my issue:
builder.Services.AddHttpClient();
Good read. I also use a step in my task sequence that does the Import-Module, and oddly it works amazingly in all models except, recently, the Lenovo M920s. When the step runs, the screen goes blank and then nothing. Not sure why, but this only happens when I image that model. I ended up creating a second TS that does not use that step. I use this so I can do an auto computer name generator that pulls the last five of the serial number, then adds the device to AD in a specific OU based on the device type. The process also checks if the device already exists in AD and prompts to check a box if this is a reimage. Anyone experienced this blank screen issue? I am going to try rebuilding the package with new NuGet and see if that helps.
For me this was occurring due to the way WSL handles files; it seems to conflict with the way pnpm uses symlinks. To fix this, I set the node linker to be hoisted.
In the .npmrc file in the root of your project, add the line:
node-linker=hoisted
Bigtable now supports Continuous Materialized Views for write-time, incrementally processed pre-aggregations, and SQL GROUP BYs for read-time aggregations, which make these kinds of operations much easier.
I tried my code on another laptop and it worked flawlessly. There must be something wrong with my work laptop.
This answer was written in June 2025.
The solution for me was to upgrade both compileSdkVersion and targetSdkVersion to 33 in the app-level build.gradle.
Any luck on this? It seems like a silly thing to be missing if there really is no full screen option
There is no free API for DL and vehicle details. For prod we can give details at low cost; if needed, let me know.
I've just opened their properties and made them 'Hidden' so I don't see them. :)
loginUsername = input("Cunso: ")
loginPassword = input(" JanganDibuka#08")
data=open('database.txt', 'r')
accounts = data.readlines()
for line in data:
accounts = line.split(",")
if (loginUsername == accounts[0] and loginPassword == accounts[1]):
print("LOGGED IN")
else:
print("Login SUCCES")
print(accounts)
How to check whether the username is in the text file, and then ask for the password?
loginUsername = input("Enter Username: ")
loginPassword = input("Enter PASSWORD: ")
data=open('database.txt', 'r')
accounts = data.readlines()
for line in data:
accounts = line.split(",")
if (loginUsername == accounts[0] and loginPassword == accounts[1]):
print("LOGGED IN")
else:
print("Login FAILED")
print(accounts)
I want to make a text login system which asks for the username first. After checking the text file that stores the username and password, the system will ask for the password. But I don't know how to read the first column (the username; the structure of the text file is "username,password") if I use readlines() and split(","). Also, there is a "\n" left at the end of the password.
Comment from quamrana: What is accounts = data.readlines()? Surely this exhausts the file.
# You should always use CamelCase for class names and snake_case
# for everything else in Python as recommended in PEP8.
username = input("Cunso: ")
password = input("JanganDibuka#08: ")
# You can use a list to store the database's credentials.
credentials = []
# You can use context manager that will automatically
# close the file for you, when you are done with it.
with open("database.txt") as data:
for line in data:
line = line.strip("\n")
credentials.append(line.split(","))
authorized = False
for credential in credentials:
db_username = credential[0]
db_password = credential[1]
if username == db_username and password == db_password:
authorized = True
if authorized:
print("Login Succeeded.")
else:
print("Login Failed.")
mystring = "password\n"
print(mystring.rstrip())
>>> 'password'
I had a similar issue, but my DB column definition is int2, so I had to use ::smallint to cast the type.
I have created a username with a password, and there are still 3 other "root" users left. I really do not want someone logging in from outside to my database, so with "root" can I delete or modify privileges (and how) to accomplish this?
I attached a picture.
Get back....
Okay, if you want to split a set of N features (F) into two complementary subsets (S1, S2), and you have a complementarity score C(f_i, f_j) between any two features f_i and f_j:
The goal is to maximize the total complementarity between S1 and S2, say:
Total_Complementarity(S1, S2) = sum(C(f_i, f_j) for f_i in S1 for f_j in S2)
Greedy algorithm (a sketch follows after these steps):
Initialize S1 and S2 (e.g., S1 with one arbitrary feature, S2 with the rest).
Iteratively move a feature from one subset to the other if that move increases Total_Complementarity(S1, S2).
Or, start with S1 empty, S2 = F. Iteratively move the feature from S2 to S1 that results in the largest increase in Total_Complementarity. Stop when S1 has N/2 features, or when no move improves the score.
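A sketch of the second greedy variant (grow S1 from empty; the complementarity matrix C is assumed symmetric and purely illustrative):

function greedySplit(n: number, C: number[][]): [number[], number[]] {
  const s1: number[] = [];
  const s2 = new Set<number>(Array.from({ length: n }, (_, i) => i));

  while (s1.length < Math.floor(n / 2)) {
    let bestFeature = -1;
    let bestDelta = -Infinity;
    for (const f of s2) {
      // Moving f from S2 to S1 adds its links to the rest of S2
      // and removes the links it currently receives from S1 members.
      let delta = 0;
      for (const g of s2) if (g !== f) delta += C[f][g];
      for (const h of s1) delta -= C[h][f];
      if (delta > bestDelta) {
        bestDelta = delta;
        bestFeature = f;
      }
    }
    if (bestFeature < 0 || bestDelta <= 0) break; // no improving move left
    s2.delete(bestFeature);
    s1.push(bestFeature);
  }
  return [s1, [...s2]];
}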
You're looking at documentation for the Basic Display API. The Graph API does not allow refreshing long-lived tokens; instead, you need to create a permanent token via a System User in Business Manager, assigned to your Meta app with the appropriate assets and permissions, and generate a System User access token using an App ID, App Secret, and the system user's generated token.
Solution provided by @Phil in the comment section: update the base config as well as the router's basename with my app name, then update the Tomcat config as mentioned in this post. (A sketch of the two changes is below.)
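For anyone landing here, a minimal sketch of those two changes (assuming Vite and React Router v6.4+; '/my-app' stands in for the actual app name):

// vite.config.ts - base config
import { defineConfig } from 'vite';

export default defineConfig({
  base: '/my-app/',
});

// router setup - matching basename
import { createBrowserRouter } from 'react-router-dom';

const router = createBrowserRouter(
  [{ path: '/', element: null /* your root element here */ }],
  { basename: '/my-app' }
);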
Just type { ...localStorage } in the devtools to see all values. It's the same for sessionStorage.
Note that with autoconf+automake, CFLAGS belongs to the end-user, not the package developer or the package system. Your changes belong in AM_CFLAGS, AM_CXXFLAGS, AM_CPPFLAGS, or AM_LDFLAGS, where the end-user can override them with the non-AM_ variables on the configure command line or in the environment when configure is run. For details, see:
https://www.gnu.org/software/automake/manual/html_node/Flag-Variables-Ordering.html
I have the same problem, do you know how to solve it?