I know this question was asked a while ago, but I recently ran into the same error.
In my case it came from a mismatch between my entity and the actual table structure (for example, a field existed in the entity but not in the database table).
So double-check that your entity is fully aligned with your database schema; fixing that should resolve the problem.
I was struggling with redirect() not triggering at all for half an hour, and it turned out my layout.tsx didn't accept and render its children prop. Once layout.tsx rendered children, the redirect worked, lmao.
I don't believe it's supported yet; see the source code.
But I bet you could make a JS workaround: add the index as a property on your data, then use context.item.index.
Your .eslintrc.js uses CommonJS syntax (module.exports) while your package.json has "type": "module", so Node treats .js files as ES modules and the config fails to load.
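A minimal sketch of the usual fix (hypothetical file contents shown in a scratch directory): rename the config to `.cjs`, because Node always loads `.cjs` files as CommonJS regardless of the `"type"` field.

```shell
set -eu
cd "$(mktemp -d)"                 # stand-in for your project root
printf '{ "name": "demo", "type": "module" }\n' > package.json
printf 'module.exports = { root: true };\n' > .eslintrc.js
# With "type": "module", require()-ing .eslintrc.js fails; .cjs is always CommonJS
mv .eslintrc.js .eslintrc.cjs
ls .eslintrc.cjs
```

The alternative is to keep the `.js` name and rewrite the config as an ES module (`export default { ... }`), if your ESLint version supports that.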
The best and easiest way to resolve this issue is to remove the "C##" username prefix requirement so you can create the user normally in the CDB. Simply run the following, then restart the Oracle DB:
ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER SYSTEM SET COMMON_USER_PREFIX = '' SCOPE=SPFILE;
Here CDB$ROOT is the root container (not a pluggable database), and SCOPE=SPFILE means the change takes effect only after a restart.
Four years late, but if anyone finds this and it's the binary-cache error: what fixed it for me was deleting the binary-cache file under root/.cache/nix and rebuilding.
I made some changes in the JMeter source files, but now the certificate hierarchy in Chrome contains only the leaf certificate, so the connection can't be established. Please give me suggestions on this.
To answer my own question, based on the comments above: the preprocessed syntax is indeed invalid according to the official standards; hence the use of a GNU-extension preprocessor feature requires -std=gnu++XY.
In my Mac build I had added -std=c++20 myself, thinking it wouldn't hurt, but in fact I was shooting myself in the foot.
On an LXD container you can do it this way:
lxc profile create losetup-profile
lxc profile device add losetup-profile loop6 unix-block path=/dev/loop6
lxc profile add mycontainer losetup-profile
Adapt the profile name, loop device and container name to suit your needs.
More useful info at https://www.forshee.me/container-mounts-in-ubuntu-1604/
Add
android.experimental.enable16kPages=true
to android/gradle.properties.
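In shell form (note the correct filename is gradle.properties; run this from your project root, shown here in a scratch directory):

```shell
set -eu
cd "$(mktemp -d)"        # stand-in for your project root
mkdir -p android
# append the experimental 16 KB page size flag
echo 'android.experimental.enable16kPages=true' >> android/gradle.properties
grep enable16kPages android/gradle.properties
```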
I ended up writing a raw client, and it serves the purpose very well. I extended it beyond my original goal of scraping the PBC stream: it reads the chosen stream's audio data and routes it to an Icecast server. I've since added a new feature that extracts a YouTube live stream's audio, encodes it to MP3, and sends it to the Icecast server. I also plan to add live microphone casting and finish it off with a GUI in Tkinter or another suitable Python library. Sorry, I can't post the code here, as I've split the whole thing into separate modules.
I'd say it depends on what you're working on. If you're working for a single company, it makes sense to put WP into the repo, to be able to track changes and all. If it's something separate, like a plugin or theme, I think you just declare the WP version in style.css with:
Requires at least: 5
Tested up to: 6.8
And that's it. As for being safe in prod: if you have a staging server, version your updates through your VCS, never change anything on stage/prod manually (only through pushes), and test your code well, and you should be fine. No?
https://github.com/tokyoxpa3/RdpClientBridge
I suspect you're using the RDP protocol. On non-server editions of Windows, RDP's D3D support is disabled by default and has to be enabled manually. The fix is as follows:
Step 1: open the Local Group Policy Editor
Press Win + R to open the Run dialog.
Type gpedit.msc and press Enter.
In the left pane, navigate to:
Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Remote Session Environment
In the right pane, find the setting you highlighted:
"Use the hardware graphics adapter for all Remote Desktop Services sessions"
(On some newer Windows versions or translations this item may be named "Use the hardware default graphics adapter for all Remote Desktop Services sessions".)
Double-click the setting to open it:
Set it to "Enabled".
Click "OK" to save.
Open a Command Prompt (Win + R, then cmd).
Type the following command and press Enter:
gpupdate /force
Disconnect your current Remote Desktop session, reconnect, and run your game or D3D application to check that rendering works.
Hit a similar problem today with yarn 4.9.2 and Next.js 16. It turned out to be the PnP node linker: switch to node-modules by adding nodeLinker: node-modules to .yarnrc.yml and the problem goes away.
I've tried Yarn PnP several times but always end up back on node-modules.
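Sketch of the change (a scratch directory stands in for the project root; after editing you'd rerun yarn install so Yarn drops the PnP files and creates node_modules):

```shell
set -eu
cd "$(mktemp -d)"                       # stand-in for the project root
echo 'nodeLinker: node-modules' >> .yarnrc.yml
cat .yarnrc.yml
# then: rm -rf .pnp.cjs .pnp.loader.mjs && yarn install
```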
I would recommend also checking out the Vertical Slice architecture as an alternative.
Read more about that here:
For good or ill, @user1072814 inspired me to give it another go, and with a little help from ChatGPT (it gets so much wrong that I'm not worried about our AI overlords taking over just yet), this is what I ended up with. It works on all my devices using the endpoint given at the end of the script.
It's good enough for my meagre needs (ad blocking), but I'm sharing it here in case anyone benefits; change the variables at the top of the script to suit. This was run and tested from scratch on an Oracle free-tier Ubuntu minimal instance. Remember to check and open any ports you may require in the Oracle console (as long as 80 and 443 are allowed, you're golden). Edit the toml block for dnscrypt-proxy below (the section starting with
# dnscrypt-proxy config - for server-side DoH TLS
) as you see fit for your own upstream server (the default is Cloudflare; again, I'm only using this for centralised ad blocking, not tin-foil-hat relays and anonymising) and personal dnscrypt options.
#!/usr/bin/env bash
# dnscrypt_oneclick_final_doh_direct_b.sh
# One-click installer: dnscrypt-proxy (DoH TLS on 443) + nginx (HTTP only) +
# Let's Encrypt (ECDSA secp256r1) + renewal hook + health monitor + alerts
#
DOMAIN="your domain here"
EMAIL="your email here"
set -euo pipefail
set -x
CERT_DIR="/etc/letsencrypt/live/${DOMAIN}"
WEBROOT="/var/www/html"
WWW_DIR="/var/www/ca"
DNSCRYPT_CONF="/etc/dnscrypt-proxy/dnscrypt-proxy.toml"
DNSCRYPT_USER_FILES="/usr/local/dnscrypt-proxy"
DOH_PORT=443
LOCAL_DOH_PORT=3000
NGINX_HTTP_CONF="/etc/nginx/sites-available/dnscrypt-http.conf"
NGINX_HTTP_ENABLED="/etc/nginx/sites-enabled/dnscrypt-http.conf"
RENEW_HOOK_DIR="/etc/letsencrypt/renewal-hooks/deploy"
HEALTH_SCRIPT="/usr/local/bin/dnscrypt_health.sh"
LOG_FILE="/var/log/dnscrypt_health.log"
if [ "$(id -u)" -ne 0 ]; then
echo "Run as root: sudo $0"
exit 1
fi
export DEBIAN_FRONTEND=noninteractive
echo "=== Installing packages ==="
apt update
apt install -y dnscrypt-proxy nginx openssl curl unzip iptables-persistent netfilter-persistent certbot python3-certbot-nginx mailutils cron
echo "=== Creating webroot & WWW dirs ==="
mkdir -p "${WEBROOT}/.well-known/acme-challenge" "${WWW_DIR}"
chown -R www-data:www-data "${WEBROOT}" "${WWW_DIR}"
# -------------------------
# iptables (safe insert)
# -------------------------
insert_if_missing() {
local chain="$1"; shift
local pos=1
# optional numeric insert position as the next argument;
# iptables -C accepts no position, so it must be stripped before the check
if [ "$#" -gt 0 ] && [ "$1" -eq "$1" ] 2>/dev/null; then
pos="$1"; shift
fi
if iptables -C "$chain" "$@" 2>/dev/null; then
echo "Rule exists: ${chain} $*"
else
iptables -I "$chain" "$pos" "$@"
echo "Inserted: ${chain} $*"
fi
}
if ! iptables -C INPUT -j ACCEPT 2>/dev/null; then
iptables -I INPUT -j ACCEPT
fi
insert_if_missing INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT || true
insert_if_missing INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT || true
insert_if_missing INPUT 6 -m state --state NEW -p udp --dport 443 -j ACCEPT || true
insert_if_missing INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT || true
netfilter-persistent save || iptables-save > /etc/iptables/rules.v4
# -------------------------
# dnscrypt-proxy config (minimal + DoH on 0.0.0.0:443)
# -------------------------
echo "=== Backing up and writing dnscrypt-proxy config ==="
[ -f "${DNSCRYPT_CONF}" ] && cp "${DNSCRYPT_CONF}" "${DNSCRYPT_CONF}.bak-$(date +%s)" || true
cat > "${DNSCRYPT_CONF}" <<'TOML'
# dnscrypt-proxy config - for server-side DoH TLS
# local DNS listeners (for local clients and system)
server_names = ['cloudflare']
listen_addresses = ['127.0.0.1:53']
ipv4_servers = true
ipv6_servers = false
dnscrypt_servers = true
doh_servers = true
require_dnssec = true
require_nolog = true
require_nofilter = true
log_file = '/var/log/dnscrypt-proxy/dnscrypt-proxy.log'
log_level = 2
log_file_latest = true
block_ipv6 = true
block_unqualified = true
block_undelegated = true
reject_ttl = 10
cache = true
cache_size = 4096
cache_min_ttl = 2400
cache_max_ttl = 86400
cache_neg_min_ttl = 60
cache_neg_max_ttl = 600
netprobe_address = '1.1.1.1:53'
# External DoH/TLS listener (dnscrypt-proxy will terminate TLS directly on 0.0.0.0:443)
[local_doh]
# listen on all interfaces port 443 for external DoH clients
listen_addresses = ['0.0.0.0:443']
path = '/dns-query'
# cert paths will be populated by installer (letsencrypt files)
cert_file = '/etc/letsencrypt/live/amdnscrypt.ddns.net/fullchain.pem'
cert_key_file = '/etc/letsencrypt/live/amdnscrypt.ddns.net/privkey.pem'
# Optional: also serve a local DoH endpoint (unused by external clients,
# but useful for testing or nginx reverse-proxy if you want)
# Add another local_doh listener if your dnscrypt-proxy supports it (some versions vary)
# local_doh_listen = ['127.0.0.1:3000']
[captive_portals]
map_file = '/usr/local/dnscrypt-proxy/captive-portals.txt'
[blocked_names]
blocked_names_file = '/usr/local/dnscrypt-proxy/blocked-names.txt'
log_file = '/usr/local/dnscrypt-proxy/blocked-names.log'
[blocked_ips]
blocked_ips_file = '/usr/local/dnscrypt-proxy/blocked-ips.txt'
log_file = '/usr/local/dnscrypt-proxy/blocked-ips.log'
[allowed_names]
allowed_names_file = '/usr/local/dnscrypt-proxy/allowed-names.txt'
[allowed_ips]
allowed_ips_file = '/usr/local/dnscrypt-proxy/allowed-ips.txt'
[broken_implementations]
fragments_blocked = [
'cisco',
'cisco-ipv6',
'cisco-familyshield',
'cisco-familyshield-ipv6',
'cisco-sandbox',
'cleanbrowsing-adult',
'cleanbrowsing-adult-ipv6',
'cleanbrowsing-family',
'cleanbrowsing-family-ipv6',
'cleanbrowsing-security',
'cleanbrowsing-security-ipv6',
]
[sources]
[sources.public-resolvers]
urls = [
'https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/public-resolvers.md',
'https://download.dnscrypt.info/resolvers-list/v3/public-resolvers.md'
]
cache_file = 'public-resolvers.md'
minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
refresh_delay = 73
prefix = ''
TOML
# the heredoc above is quoted, so patch the real domain into the cert paths now
sed -i "s#live/amdnscrypt.ddns.net/#live/${DOMAIN}/#g" "${DNSCRYPT_CONF}"
# stop dnscrypt-proxy so it doesn't try to bind before certs exist
systemctl daemon-reload
systemctl enable dnscrypt-proxy || true
systemctl stop dnscrypt-proxy || true
# -------------------------
# Ensure nginx will not fail loading SSL during bootstrap
# -------------------------
echo "=== moving existing enabled sites aside and disabling SSL confs ==="
mkdir -p /etc/nginx/sites-enabled.bak
if [ -d /etc/nginx/sites-enabled ]; then
for s in /etc/nginx/sites-enabled/*; do
[ -e "$s" ] || continue
mv -f "$s" /etc/nginx/sites-enabled.bak/ || true
done
fi
if [ -d /etc/nginx/conf.d ]; then
for f in /etc/nginx/conf.d/*.conf; do
[ -f "$f" ] || continue
if grep -qi "ssl_certificate" "$f"; then
mv -f "$f" "${f}.disabled-ssl" || true
fi
done
fi
if grep -qi "ssl_certificate" /etc/nginx/nginx.conf 2>/dev/null; then
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak-$(date +%s)
sed -i '/ssl_certificate/d' /etc/nginx/nginx.conf || true
sed -i '/ssl_certificate_key/d' /etc/nginx/nginx.conf || true
fi
# -------------------------
# Temporary HTTP-only nginx config for ACME
# -------------------------
echo "=== installing temporary HTTP-only nginx config ==="
cat > "${NGINX_HTTP_CONF}" <<NGHTTP
server {
listen 80;
listen [::]:80;
server_name ${DOMAIN};
root ${WWW_DIR};
index index.html;
location /.well-known/acme-challenge/ {
root ${WEBROOT};
}
location / {
return 301 https://\$host\$request_uri;
}
}
NGHTTP
ln -sf "${NGINX_HTTP_CONF}" "${NGINX_HTTP_ENABLED}"
nginx -t
systemctl restart nginx
# -------------------------
# Obtain ECDSA cert with certbot (webroot)
# -------------------------
echo "=== Requesting ECDSA certificate from Let's Encrypt (secp256r1) ==="
certbot certonly --webroot -w "${WEBROOT}" -d "${DOMAIN}" --non-interactive --agree-tos -m "${EMAIL}" --key-type ecdsa --elliptic-curve secp256r1 || {
echo "Certbot issuance failed; inspect /var/log/letsencrypt/letsencrypt.log"
exit 1
}
if [ ! -f "${CERT_DIR}/fullchain.pem" ]; then
echo "Expected certs not found in ${CERT_DIR}; aborting"
exit 1
fi
# fix permissions so dnscrypt-proxy and nginx can read certificate files
chown -R root:root /etc/letsencrypt
chmod 644 "${CERT_DIR}/fullchain.pem" || true
chmod 640 "${CERT_DIR}/privkey.pem" || true
chown root:www-data "${CERT_DIR}/privkey.pem" || true
# -------------------------
# Ensure systemd socket not masked and start dnscrypt-proxy
# -------------------------
echo "=== ensuring dnscrypt-proxy socket is unmasked and starting service ==="
# If systemd socket unit exists and is masked, unmask it
if systemctl list-unit-files | grep -q '^dnscrypt-proxy.socket'; then
sudo systemctl unmask dnscrypt-proxy.socket || true
sudo systemctl enable dnscrypt-proxy.socket || true
sudo systemctl start dnscrypt-proxy.socket || true
fi
# start the dnscrypt-proxy service which will bind 0.0.0.0:443
systemctl restart dnscrypt-proxy || {
# If systemd refuses because socket is masked, try to start service directly after unmasking
systemctl daemon-reload
systemctl unmask dnscrypt-proxy.socket || true
systemctl restart dnscrypt-proxy || true
}
sleep 1
systemctl status dnscrypt-proxy --no-pager || true
# -------------------------
# Finalize nginx (keep HTTP-only)
# -------------------------
echo "=== finalizing nginx (HTTP-only, ACME/static only) ==="
# remove temporary enabled file (site still available in sites-available)
rm -f "${NGINX_HTTP_ENABLED}"
# restore other non-SSL enabled sites from backup if they exist (they were moved aside earlier)
if [ -d /etc/nginx/sites-enabled.bak ]; then
for f in /etc/nginx/sites-enabled.bak/*; do
[ -e "$f" ] || continue
mv -f "$f" /etc/nginx/sites-enabled/ || true
done
fi
nginx -t
systemctl reload nginx
# -------------------------
# Install certbot renewal hook to reload nginx and restart dnscrypt-proxy
# -------------------------
mkdir -p "${RENEW_HOOK_DIR}"
cat > "${RENEW_HOOK_DIR}/reload-dnscrypt-nginx.sh" <<'EOF'
#!/usr/bin/env bash
LOG="/var/log/letsencrypt-renewal-reload.log"
{
echo "[$(date)] deploy hook started: RENEWED_LINEAGE=${RENEWED_LINEAGE}"
systemctl reload nginx || systemctl restart nginx || true
systemctl restart dnscrypt-proxy || true
echo "[$(date)] deploy hook finished"
} >> "$LOG" 2>&1
EOF
chmod +x "${RENEW_HOOK_DIR}/reload-dnscrypt-nginx.sh"
# -------------------------
# Health monitor (self-heal + email) - installed
# -------------------------
# write the header with an unquoted heredoc so DOMAIN/EMAIL expand now, at install time
cat > "${HEALTH_SCRIPT}" <<HSHVARS
#!/usr/bin/env bash
set -euo pipefail
DOMAIN="${DOMAIN}"
EMAIL="${EMAIL}"
HSHVARS
# the rest is quoted so variables expand when the health script itself runs
cat >> "${HEALTH_SCRIPT}" <<'HSH'
CERT="/etc/letsencrypt/live/${DOMAIN}/fullchain.pem"
LOG="/var/log/dnscrypt_health.log"
TMPDIR="/var/tmp/dnscrypt_health"
mkdir -p "${TMPDIR}"
touch "${LOG}"
STATE_CERT="${TMPDIR}/cert_alert_sent"
STATE_NGINX="${TMPDIR}/nginx_alert_sent"
STATE_DC="${TMPDIR}/dnscrypt_alert_sent"
send_mail() {
local subject="$1"
local body="$2"
echo -e "${body}" | mail -s "${subject}" "${EMAIL}"
echo "[$(date)] Sent alert: ${subject}" >> "${LOG}"
}
DRY_RUN=0
if [ "${1:-}" = "--dry-run" ]; then
DRY_RUN=1
fi
# Certificate expiry
if [ -f "${CERT}" ]; then
enddate=$(openssl x509 -in "${CERT}" -noout -enddate 2>/dev/null | cut -d= -f2 || echo "")
if [ -n "${enddate}" ]; then
endsec=$(date -d "${enddate}" +%s)
now=$(date +%s)
days_left=$(( (endsec - now) / 86400 ))
else
days_left=0
fi
else
days_left=0
fi
if [ "${days_left}" -lt 10 ]; then
SUBJECT="[ALERT] Certificate for ${DOMAIN} expires in ${days_left} days"
BODY="Certificate for ${DOMAIN} expires in ${days_left} days.\n\nCheck: sudo openssl x509 -in ${CERT} -noout -text\n\nThis is an automated alert."
if [ "${DRY_RUN}" -eq 1 ]; then
echo "DRY RUN: ${SUBJECT}"
echo -e "${BODY}"
else
today=$(date +%F)
if [ ! -f "${STATE_CERT}" ] || [ "$(cat "${STATE_CERT}")" != "${today}" ]; then
send_mail "${SUBJECT}" "${BODY}"
echo "${today}" > "${STATE_CERT}"
fi
fi
fi
attempt_restart_and_check() {
local svc="$1"
local statefile="$2"
echo "[$(date)] Attempting restart: ${svc}" >> "${LOG}"
systemctl restart "${svc}" || true
sleep 5
if systemctl is-active --quiet "${svc}"; then
echo "[$(date)] ${svc} active after restart" >> "${LOG}"
[ -f "${statefile}" ] && rm -f "${statefile}"
return 0
else
echo "[$(date)] ${svc} still down after restart" >> "${LOG}"
return 1
fi
}
# nginx
if ! systemctl is-active --quiet nginx; then
if [ "${DRY_RUN}" -eq 1 ]; then
echo "DRY RUN: nginx inactive"
else
if ! attempt_restart_and_check "nginx" "${STATE_NGINX}"; then
SUBJECT="[ALERT] nginx is not running on ${DOMAIN}"
BODY="nginx is not active on $(hostname) as of $(date). Restart attempts failed.\n\nJournalctl (last 50):\n$(journalctl -u nginx -n 50 --no-pager)\n"
today=$(date +%F)
if [ ! -f "${STATE_NGINX}" ] || [ "$(cat "${STATE_NGINX}")" != "${today}" ]; then
send_mail "${SUBJECT}" "${BODY}"
echo "${today}" > "${STATE_NGINX}"
fi
fi
fi
fi
# dnscrypt-proxy
if ! systemctl is-active --quiet dnscrypt-proxy; then
if [ "${DRY_RUN}" -eq 1 ]; then
echo "DRY RUN: dnscrypt-proxy inactive"
else
if ! attempt_restart_and_check "dnscrypt-proxy" "${STATE_DC}"; then
SUBJECT="[ALERT] dnscrypt-proxy is not running on $(hostname)"
BODY="dnscrypt-proxy is not active on $(hostname) as of $(date). Restart attempts failed.\n\nJournalctl (last 50):\n$(journalctl -u dnscrypt-proxy -n 50 --no-pager)\n"
today=$(date +%F)
if [ ! -f "${STATE_DC}" ] || [ "$(cat "${STATE_DC}")" != "${today}" ]; then
send_mail "${SUBJECT}" "${BODY}"
echo "${today}" > "${STATE_DC}"
fi
fi
fi
fi
exit 0
HSH
chmod +x "${HEALTH_SCRIPT}"
# Cron job every 6 hours
cat > /etc/cron.d/dnscrypt_health <<'CRON'
0 */6 * * * root /usr/local/bin/dnscrypt_health.sh >> /var/log/dnscrypt_health.log 2>&1
CRON
touch "${LOG_FILE}"
chown root:root "${LOG_FILE}"
chmod 644 "${LOG_FILE}"
# Dry-run renewal test
certbot renew --dry-run || echo "certbot dry-run failed - check /var/log/letsencrypt/letsencrypt.log"
# Create a directory for extra dnscrypt files (blocked-names.txt, allowed-names.txt, etc.) and populate it with basic files pulled live from the dnscrypt GitHub repo - file paths are set in dnscrypt-proxy.toml
if [ ! -d "$DNSCRYPT_USER_FILES" ]; then
echo "Creating $DNSCRYPT_USER_FILES for block and allowed lists..."
mkdir -p "$DNSCRYPT_USER_FILES"
echo "Downloading basic block and allow lists for domains and ips + captive portal info to $DNSCRYPT_USER_FILES..."
curl -o "$DNSCRYPT_USER_FILES/blocked-names.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-blocked-names.txt
curl -o "$DNSCRYPT_USER_FILES/blocked-ips.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-blocked-ips.txt
curl -o "$DNSCRYPT_USER_FILES/allowed-names.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-allowed-names.txt
curl -o "$DNSCRYPT_USER_FILES/allowed-ips.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-allowed-ips.txt
curl -o "$DNSCRYPT_USER_FILES/captive-portals.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-captive-portals.txt
fi
# Final client instructions
cat <<EOF
INSTALL COMPLETE
Domain: ${DOMAIN}
DoH endpoint: https://${DOMAIN}/dns-query (dnscrypt-proxy terminates TLS on 443)
Nginx: HTTP-only for ACME & static files (port 80)
Alerts to: ${EMAIL}
Browser instructions:
1) Firefox Desktop:
Preferences → Settings → Network Settings → Enable DNS over HTTPS → Custom: https://${DOMAIN}/dns-query
2) Firefox Android:
Settings → General → Network Settings → Use custom DoH: https://${DOMAIN}/dns-query
3) Chrome Desktop:
Settings → Privacy and security → Security → Use secure DNS → Custom: https://${DOMAIN}/dns-query
4) Chrome Android:
Settings → Privacy and security → Use secure DNS → Custom provider: https://${DOMAIN}/dns-query
EOF
echo "=== DONE ==="
The following will ask for two numbers to multiply:
Number1 = int(input("Type in the first number: "))
Number2 = int(input("Type in the second number: "))
print("The answer is", Number1*Number2)
If you can answer yes to these 3 questions, start with React; otherwise go with HTML/CSS and vanilla JS.
Do you intend to code long term, and are you entirely sure this isn't just a hobby but a career path?
Are you more technical than you are creative?
Are you single with no chance of getting a girlfriend?
Then yes, React is for you, my friend.
You can combine two scopes in OR context without closures using orWhere like this:
Subscription::active()->orWhere->future()->get();
Per @shawn's answer, using the sicp package and setting #lang sicp instead of #lang racket solves my problem, but it's still not quite clear to me how exactly the package solves it.
In order to prevent redundancy, you want to link tables to each other with relations.
CREATE TABLE Ship (
ShipID INT AUTO_INCREMENT PRIMARY KEY,
ShipName VARCHAR(255) NOT NULL UNIQUE
);
CREATE TABLE Company (
CompanyID INT AUTO_INCREMENT PRIMARY KEY,
CompanyName VARCHAR(255) NOT NULL UNIQUE
);
CREATE TABLE ShipOwnership (
OwnershipID INT AUTO_INCREMENT PRIMARY KEY,
ShipID INT NOT NULL,
CompanyID INT NOT NULL,
StartDate DATE NOT NULL,
EndDate DATE NULL, -- stays NULL until the ship gets another owner or is discarded
FOREIGN KEY (ShipID) REFERENCES Ship(ShipID),
FOREIGN KEY (CompanyID) REFERENCES Company(CompanyID)
);
Who owned the ship when?
INSERT INTO Ship (ShipName)
VALUES ('RMS Titanic');
INSERT INTO Company (CompanyName)
VALUES ('White Star Line');
INSERT INTO ShipOwnership (ShipID, CompanyID, StartDate, EndDate)
SELECT
s.ShipID,
c.CompanyID,
'1909-03-31' AS StartDate,
'1912-04-15' AS EndDate
FROM Ship s
JOIN Company c
WHERE s.ShipName = 'RMS Titanic'
AND c.CompanyName = 'White Star Line';
SELECT c.CompanyName
FROM ShipOwnership o
JOIN Ship s ON o.ShipID = s.ShipID
JOIN Company c ON o.CompanyID = c.CompanyID
WHERE s.ShipName LIKE '%Titanic%'
AND '1911-06-01' BETWEEN o.StartDate AND COALESCE(o.EndDate, '9999-12-31'); -- COALESCE is portable; ISNULL is SQL Server-specific, while the DDL above is MySQL
This is what git stash create is for. It creates a stash commit and outputs the hash, but it does not add the commit either to the branch or the stash list. (It has no option to include untracked files. The stash commit will eventually be garbage-collected, assuming you don't create a reference to it later.)
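A small demonstration in a throwaway repo; `git stash store` is the companion command that registers the created commit in the stash list so it won't be garbage-collected:

```shell
set -eu
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo base > f.txt && git add f.txt && git commit -qm init
echo change >> f.txt                    # dirty the worktree
hash=$(git stash create "wip snapshot") # stash commit, no ref, worktree untouched
git cat-file -t "$hash"                 # it's a real commit object
git stash store -m "wip snapshot" "$hash"
git stash list
```

Note that unlike git stash push, git stash create leaves your worktree dirty; it only records the snapshot.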
For me it was in Azure -> Static Web App -> the Workflow section: the referenced workflow file no longer existed in my repository because I had renamed it.
Ctrl+Shift+Enter... it is infuriating, and I haven't found a workaround.
For a (possible) resolution see
https://github.com/dotnet/aspnetcore/issues/64501
Apparently it is "by design", and setting a specific option makes number handling stricter. I don't quite agree with the design, but those are people with far more experience than me :-)
Thanks for the clear explanation. This helps a lot. One question: if I keep both projects inside one monorepo for development (Django in a backend folder and Vue in a frontend folder), is that still considered a reasonable approach as long as I deploy them separately? I just want to make sure I’m not creating issues later when I move to production.
You're right. Anyway, as I said, I've been asked to fetch the data from https://www.bloomberg.com/asia (Bloomberg), so any suggestions?
# Source - https://stackoverflow.com/q/79827652
# Posted by Maj mac, modified by community. See post 'Timeline' for change history
# Retrieved 2025-11-23, License - CC BY-SA 4.0
style.configure('Dark.TButton', background=button_base, foreground=colors['text'], borderwidth=1,
bordercolor=edge, lightcolor=edge, darkcolor=edge, padding=(10, 4))
style.map('Dark.TButton',
background=[('pressed', pressed_bg or hover_bg or button_base),
('active', hover_bg or button_base),
('!disabled', 'red')],
foreground=[('active', colors['text']), ('pressed', colors['text'])],
bordercolor=[('pressed', edge), ('active', edge)],
lightcolor=[('pressed', edge), ('active', edge)],
darkcolor=[('pressed', edge), ('active', edge)])
style.layout('Dark.TButton', [
('Button.border', {'sticky': 'nswe', 'children': [
('Button.padding', {'sticky': 'nswe', 'children': [
('Button.label', {'sticky': 'nswe'})
]})
] })
])
FPDF is an alternative for that: https://www.fpdf.org/
Here is an example: https://www.fpdf.org/en/script/script40.php
ALTER TABLE your_table DROP PARTITION your_partition_name;
This drops the partition without locking the table.
Also, Vedal has spent years developing his AI, so I wouldn't be surprised if his project files have reached a colossal size by now; the fact that he had to buy a new computer because his old one wasn't powerful enough to run it is proof.
Okay..... I won't delete this post, even though I'm s***** like 5 meters of dirt road....
look at this:
let operation = CKQueryOperation(query: query)
operation.desiredKeys = ["name"]
operation.resultsLimit = 50
I forgot to add "products" to the desiredKeys. For everyone else: don't overthink it; just go through everything again.
The shortest alternative for a standalone form could be just 2 script elements: one for pointing to Javascript source, one for embedding XHTML+XForms.
The official CMSIS headers for STM32 devices are also available in their GitHub repositories. When creating a project, I often add them as git submodules.
The one for STM32F3 is here.
My understanding is that if you need security fixes, it's enough to upgrade your JDK to 21 and leave the compiler source/target at Java 8. But if you want to use the newest features, you also need to change the compiler target to 21.
I don't know why onTrackListener and onPeerListener are not being called. Do you know why?
My best guess? You're only testing with one user. You yourself don't count as a "peer"; ON_TRACK_UPDATE and ON_PEER_UPDATE only fire when at least 2 users are present concurrently.
Ran into this same problem with a project initialized with Expo. I had run prebuild, which created the ios and android directories.
I'm building my app with EAS, so I just deleted the ios and android directories, and the URL generated by expo start was <app-name>://expo-development-client/?url=http%3A%2F%2F<ip-address>%3A8081 as expected.
A little bit more on this issue, but I wish I could find the "various issues that won’t be repeated here" mentioned in the paper. It does sound like there are plans to repair the problem(s) and get this into C++29.
It appears that this won't happen in C++26. In the November 2025 meeting, they removed trivial relocatability from the draft standard. See Herb Sutter's trip report on that meeting.
I'll try to dig up some details on why they did this. I believe it has to do with implementations that sign / authenticate pointers, so that the "just copy the bits" approach of trivial relocatability doesn't really work.
// Source - https://stackoverflow.com/a/46431435
// Posted by Adeeb karim
// Retrieved 2025-11-22, License - CC BY-SA 3.0
private void setImagePath(Intent data) throws Exception {
    String wholeID = "";
    Uri selectedImage = data.getData();
    if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.JELLY_BEAN_MR2) {
        wholeID = getUriPreKitkat(selectedImage);
    } else {
        wholeID = DocumentsContract.getDocumentId(selectedImage);
    }
    // Split at colon, use second item in the array
    Log.i("debug", "uri google drive " + wholeID);
    String id = wholeID.split(":")[1];
    String[] column = {MediaStore.Images.Media.DATA};
    // where id is equal to
    String sel = MediaStore.Images.Media._ID + "=?";
    Cursor cursor = getActivity().getContentResolver().query(
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
            column, sel, new String[]{id}, null);
    int columnIndex = cursor.getColumnIndex(column[0]);
    if (cursor.moveToFirst()) {
        filePath = cursor.getString(columnIndex);
    }
    cursor.close();
}

// Source - https://stackoverflow.com/a/63407339
// Posted by B.shruti
// Retrieved 2025-11-22, License - CC BY-SA 4.0
public static File getFile(final Context context, final Uri uri) {
    Log.e(TAG, "inside getFile==");
    ContentResolver contentResolver = context.getContentResolver();
    try {
        String mimeType = contentResolver.getType(uri);
        Cursor returnCursor = contentResolver.query(uri, null, null, null, null);
        int nameIndex = returnCursor.getColumnIndex(OpenableColumns.DISPLAY_NAME);
        int sizeIndex = returnCursor.getColumnIndex(OpenableColumns.SIZE);
        returnCursor.moveToFirst();
        String fileName = returnCursor.getString(nameIndex);
        String fileSize = Long.toString(returnCursor.getLong(sizeIndex));
        InputStream inputStream = contentResolver.openInputStream(uri);
        File tempFile = File.createTempFile(fileName, "");
        tempFile.deleteOnExit();
        FileOutputStream out = new FileOutputStream(tempFile);
        IOUtils.copyStream(inputStream, out);
        return tempFile;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
Doxygen ignores C++ "using" aliases by default, even with EXTRACT_ALL=YES. Just add a brief comment block like /** @brief Double signal */ right above the using line and it shows up.
Managed to solve it (with a minor waiver on my end) by adding a new doxygen group:
/// @addtogroup SignalTypes
/// @{
/**
* @brief Template struct representing a generic signal.
* @tparam SIGNAL_TYPE The data type of the signal value (e.g., double, float, uint8_t, etc.).
*/
template <typename SIGNAL_TYPE> struct Signal
{
bool available = false; ///< Indicates the signal is available/valid.
SIGNAL_TYPE value = 0;  ///< The signal's value.
/**
* @brief Equality operator for Signal.
* @param other The other Signal to compare against.
* @return If both signals are available, compares their values.
* If not available, checks if the other is also not available.
*/
bool operator==(const Signal<SIGNAL_TYPE>& other) const
{
return (available ? other.available && value == other.value : !other.available);
}
};
/// Type alias for double-precision floating point signals.
using DoubleSignal = Signal<double>;
/// @}
And I now use the output file group__SignalTypes instead of the Interface_8h file I used previously (this is the small waiver :) ).
Why is this fine by me?
Because the typedef types are no longer buried in a single namespace page that the user can't find and that contains a LOT of other items.
Yeah, that can be an option, but I wanted to know whether there is a way to do it with web scraping or something similar.
Please try this once:
Turn on Override IDE Shortcuts in Settings -> Tools -> Terminal.
This makes IntelliJ pass the Ctrl+C command directly to the terminal, so it works like a real PowerShell session.
In all versions of Kleopatra up to the current version 3.3 (found in Gpg4win 4.4) you can use:
Settings > Configure Kleopatra > Crypto Operations > “Create signed or encrypted files as text files.”
And how do you connect SOCKS5 proxies?
The following page explains offline installation: https://docs.python.org/dev/using/windows.html#offline-installs
From what I can see, offline installation with the installer seems a bit more involved than offline package installation, but Python does provide a method where you can acquire the necessary resources on a machine with internet access and then install them on offline machines.
There is also information about MSI packages (for cases where MSIX installation is not possible), for example in section 4.1.10.
chromadb depends on an older pydantic version. Try downgrading:
pip uninstall pydantic-settings
pip install "pydantic<2" --force-reinstall
I had to downgrade my chromadb version as well because of ConfigError: unable to infer type for attribute "clickhouse_host"
uv pip install "chromadb==0.3.21"
Can you add the full error traceback? Also mention your Python, Chrome, and Windows versions.
The issue is that Chrome locks the cookies file while running. One workaround is to copy the Cookies file to a temp location first, then read from the copy; Chrome on Windows keeps the original locked, so direct access won't work.
import shutil
import tempfile
import browser_cookie3
cookies_path = "path/to/Chrome/Cookies"
temp_cookies = tempfile.mktemp()
shutil.copy2(cookies_path, temp_cookies)
cookies = browser_cookie3.chrome(cookie_file=temp_cookies)
But this won't get live session cookies; it only shows what was saved to disk.
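To make the copy-then-read idea concrete, here is a self-contained sketch using sqlite3 on a throwaway database that stands in for Chrome's Cookies file (the real file's path and schema are environment-specific, so the table used here is hypothetical):

```python
import os
import shutil
import sqlite3
import tempfile

def read_cookies_copy(cookies_path):
    """Copy the (possibly locked) Cookies file, then query the copy."""
    fd, temp_path = tempfile.mkstemp(suffix=".sqlite")
    os.close(fd)
    shutil.copy2(cookies_path, temp_path)  # Chrome keeps the original locked
    conn = sqlite3.connect(temp_path)
    try:
        return conn.execute("SELECT name, value FROM cookies").fetchall()
    finally:
        conn.close()
        os.remove(temp_path)

# Demo with a fake Cookies database standing in for Chrome's:
demo = os.path.join(tempfile.mkdtemp(), "Cookies")
conn = sqlite3.connect(demo)
conn.execute("CREATE TABLE cookies (name TEXT, value TEXT)")
conn.execute("INSERT INTO cookies VALUES ('session', 'abc123')")
conn.commit()
conn.close()
print(read_cookies_copy(demo))  # [('session', 'abc123')]
```

The same copy-first pattern applies whether you read the copy with sqlite3 directly or hand it to browser_cookie3.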
// somewhere in content page // requires jquery // Gemini assisted answer // tested
<script>
$(document).ready(function() {
// Your code here
alert("The DOM is ready!");
contentPageScript();
});
</script>
No, Azure DevOps Artifacts does not have a built-in feature to create an "allowlist" or selectively filter packages from an upstream source.
However, you can achieve your goal of a curated feed by changing your workflow. Instead of using an upstream source, you'll create a dedicated Azure Pipeline that pulls approved packages from nuget.org and pushes them into your private feed.
You can just zip the whole folder, not the stuff inside it. Python needs the actual TestPackage/ directory to exist inside the zip.
This is the only command you need:
zip -r TestPackage.zip TestPackage/
Your old command zipped the contents, so Python couldn’t see the package at all.
I think the cloud indicates the remote head, but I'm not sure.
The id is supposed to be unique (reference)
In your example, all container IDs are equal to "expandedimg". If you change each container id to something unique, maybe pass it as a parameter, it should work just fine.
As per OP's answer: add something like
"view": {
  "viewName": {
    "type": "webview"
  }
}
to your package.json.
--environment [profile] will use the vars set in eas.json
The loop turns your 3 into a 4 so you accidentally pass an 8 into the function, but then the function rudely forces the number back down to a 3, and since 8 plus 3 is 11, the computer just keeps shouting 11 at you forever.
First of all, this is by no means a perfect example, but rather an idea of how it can be implemented. To keep the example simple, I am using @AppStorage here to save the user ID.
I had the same issue today: under tvOS, the SignInWithAppleButton does not trigger its closures. It only renders the required visual appearance of the button and the haptics/animations.
How to fix this?
I used the official SignInWithAppleButton and attached an .onTapGesture that launches a custom ASAuthorizationController with my own ASAuthorizationControllerDelegate, as the button does not trigger its built-in request or completion handlers under tvOS.
Example (the Button)
SignInWithAppleButton { _ in } onCompletion: { _ in }
.onTapGesture {
Task { await viewModel.signInWithApple() }
}
Example ViewModel
import Combine
import SwiftUI
@MainActor
class ViewModel: ObservableObject {
@AppStorage("signInWithAppleUserIdString") var signInWithAppleUserIdString: String = ""
var appleSignInManager = AppleSignInManager()
func signInWithApple() async {
let appleIdString = await appleSignInManager.signIn()
if let appleIdString {
signInWithAppleUserIdString = appleIdString
} else {
print("ERROR: USER NOT SIGNED IN WITH APPLE")
}
}
func signOutFromApple() {
signInWithAppleUserIdString = ""
}
}
Example Class
I called it AppleSignInManager because it's simple, but that's roughly how you could create it.
import AuthenticationServices
import Combine
import SwiftUI
@MainActor
final class AppleSignInManager: NSObject, ObservableObject,
ASAuthorizationControllerDelegate,
ASAuthorizationControllerPresentationContextProviding {
private var continuation: CheckedContinuation<String?, Never>?
override init() {
super.init()
}
func signIn() async -> String? {
return await withCheckedContinuation { continuation in
self.continuation = continuation
startAuthorization()
}
}
private func startAuthorization() {
let provider = ASAuthorizationAppleIDProvider()
let request = provider.createRequest()
request.requestedScopes = []
let controller = ASAuthorizationController(authorizationRequests: [request])
controller.delegate = self
controller.presentationContextProvider = self
controller.performRequests()
}
func presentationAnchor(for controller: ASAuthorizationController) -> ASPresentationAnchor {
if let keyWindow = UIApplication.shared.connectedScenes
.compactMap({ $0 as? UIWindowScene })
.flatMap({ $0.windows })
.first(where: { $0.isKeyWindow }) {
return keyWindow
}
if let windowScene = UIApplication.shared.connectedScenes
.compactMap({ $0 as? UIWindowScene })
.first {
return ASPresentationAnchor(windowScene: windowScene)
}
fatalError("NO WINDOW SCENE FOUND")
}
func authorizationController(controller: ASAuthorizationController,
didCompleteWithAuthorization authorization: ASAuthorization) {
if let credential = authorization.credential as? ASAuthorizationAppleIDCredential {
let userId = credential.user
continuation?.resume(returning: userId)
continuation = nil
} else {
continuation?.resume(returning: nil)
continuation = nil
}
}
func authorizationController(controller: ASAuthorizationController,
didCompleteWithError error: Error) {
print("ERROR:", error.localizedDescription)
continuation?.resume(returning: nil)
continuation = nil
}
}
Explanation
My ViewModel stores the user ID returned by the “Sign in with Apple” authorization process and is directly linked to the custom ASAuthorizationControllerDelegate, which provides the result.
If you take any DAG, and draw it in such a way that the leaves (the nodes without outgoing edges) are placed all at the bottom of the diagram, with their parent(s) above them, ... then how could it differ from the diagram you have shown? Is there any DAG that would not represent what you expect?
Just stop the project and re-run it. Since this uses Method Channels under the hood, which are written in Kotlin/Swift depending on the platform, they don't support hot reload/hot restart the way Flutter code does.
So you have to stop and re-run the project; also run flutter clean and flutter pub get.
https://www.ccleaner.com/recuva/download
This is a free tool I have used for many of my USB drives. It's free and works well, so give it a try!
Yeah, it happened to me too: my app successfully completed 12 testers for 14 days, but I still got an email exactly like this. It happens because the 12 testers were not testing the app daily. I then uploaded my app to an app called Closed Test Pro, got 12 testers from it for free, and those testers tested my app every day. That app has daily reminders, which help users test the installed apps once per day for 14 days.
Your observation is wrong; there is no such thing. The behavior is neither expected nor unexpected: it is whatever you have configured.
This is not about buttons, and it is not related to your web page/application.
This is just the Download option you have to set up for each browser profile. You set it the way you want in your Chrome profile and haven't done that in Firefox, that's all.
Adjust the Download option to your liking where you need it, and then you will see the proper browser behavior.
This is a classic limitation of the GIF format—unlike PNG, GIF only supports a single color as transparent and does not have an alpha channel with varying opacity. This means smooth transparent edges and anti-aliased shadows are basically impossible in GIFs, which can cause jagged edges when placed on different backgrounds.
A modern approach is to use WebP instead of GIF. WebP supports full alpha transparency with variable opacity and animation, plus better compression. It’s now widely supported across browsers and platforms, making it a great alternative to GIF for animated images with smooth transparency.
If you need to convert between these formats or make stickers, check out my app WebPeek which offers efficient GIF-to-WebP and WebP-to-GIF conversions, maintaining transparency and animation as much as possible.
I was told by a developer that yes, AS clauses can be used to change the name of the table as it is stored locally, but no, PowerSync can not sync from a view. In the future views will hopefully be unnecessary thanks to Sync Streams which allow more complex queries.
What is the point of the pixelColor variable in this code? You never use it.
Do you mean that you want to automatically generate the form code on demand based on which fields are in your database, without having to write the form code directly?
I cannot access this dataset through the link; could you please tell me how to access it? I also need this dataset for my research, and I will look into this issue as well.
IMHO it's safer and clearer to set working variables to initial values (by calling a specific paragraph/section) and this way ensure the desired behaviour. This avoids unexpected results caused by external factors like changed parameters or settings in the target environment. Of course, you need to be aware of reentrant programs that must retain values between executions, but I think that is not the scenario you have described.
Thanks for this particular discussion!
There is a Flutter package that does exactly this, Idle Logout; I am its author.
Thanks for the replies! A few of the linked threads have solutions to this for the question of which cpu architecture we're in. In this situation, all of the nodes are x86_64. Actually, the sysadmin didn't realize that they were heterogeneous until we hit this issue.
I haven't actually used a ton of CPU optimizations, just -O3. But that does turn on a ton of other things, and dialing it back to -O2 or -O1 would probably partly solve this. The code I'm running takes weeks, though, so I'm reluctant to do that.
The workaround I've been using is to have a shell script attempt to run the code. If it fails, recompile a local copy and use that. This is partly in-line with some suggestions above, although in some cases I know for sure that it's running at sub-optimal speed.
Pepijn Kramer's idea to use a shell script to query for the exact cpu might be an improvement --- and then trigger the recompile if it doesn't match.
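The try-then-recompile workaround can be sketched roughly like this (the binary name and build command are placeholders, and a real setup would build the local copy to a separate path):

```python
import subprocess

def run_with_fallback(binary, build_cmd, args=()):
    """Try the shared binary; if it fails (e.g. SIGILL from unsupported
    instructions on an older node), rebuild locally and run again."""
    result = subprocess.run([binary, *args])
    if result.returncode != 0:
        subprocess.run(build_cmd, check=True)  # e.g. ["make", "app"]
        result = subprocess.run([binary, *args])
    return result.returncode

# e.g. run_with_fallback("./app", ["make", "app"])
```

Querying the exact CPU model first (as suggested) and rebuilding on a mismatch would avoid paying for one failed run before the recompile.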
If you find any other alternatives, please update us with the details. Thank you in advance!
Thanks for the reply.
I understand the steps of the process; what I'm having difficulty figuring out is:
a. How to automatically set up metrics for a new prompt. A version release cannot automatically add a new prompt metric to the DB, because the release could contain other changes unrelated to the prompt, or changes to a different prompt. Even if each prompt is separated into its own module with its own version, if all the prompts live in the same repository then every module version gets bumped when the code is released. So my question really is: is there a way to automate releases so that versions (and therefore metrics) are updated only for prompts that have actually changed?
b. How to easily retrieve and rerun previous versions of a prompt quickly and efficiently, when other commits and changes to the code might have been made since the version being rolled back to.
It's working for me.
You can resolve this by running install_tools.bat in your Node.js directory.
There are several tools for this.
Besides manual pgloader or ora2pg approaches, there are also online converters such as mysqltopostgre.com, which can convert the dump and generate a PostgreSQL-ready script.
Depending on the complexity of your schema, it might help.
You should specify whether you want your env variable to be server-side or client-side. In Next.js, the way you wrote it, it's server-side only. If you want the variable to be accessible in the browser, add the
NEXT_PUBLIC_ prefix to its name; that way it's accessible on both client and server. Without the prefix it stays server-only, which protects sensitive info.
So in your case your variable should be:
NEXT_PUBLIC_GOOGLE_CLIENT_ID
Nextjs Docs About env variables
Writing past the ret-address slot is what causes the segfault, because you are writing the address of touch2 into the caller's frame. Instead, the address of touch2 should be pushed onto the stack from inside the buffer, so that it ends up exactly in the ret-address slot (just past buf).
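To illustrate the stack layout being described, here is a hedged sketch of building such a payload in Python; the buffer size and touch2 address are made-up values, so use the ones from your own target:

```python
import struct

BUF_SIZE = 0x28         # hypothetical distance from buf[0] to the saved return address
TOUCH2_ADDR = 0x4017ec  # hypothetical address of touch2

# Fill the buffer, then let the overflow write touch2's address
# exactly into the return-address slot -- and stop there.
payload = b"\x00" * BUF_SIZE + struct.pack("<Q", TOUCH2_ADDR)
print(len(payload))  # 48 bytes: 0x28 of padding + one 8-byte address
```

Writing anything beyond those 8 address bytes spills into the caller's frame, which is exactly the segfault scenario above.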
Modern MongoDB drivers use the unified topology, which automatically detects whether a replica set is in use. Try using the 'directConnection' option: https://www.mongodb.com/docs/manual/reference/connection-string-options/#mongodb-urioption-urioption.directConnection
You can configure Intellisense to use a specific C Standard (e.g. "cStandard": "gnu23" -> C23 + GNU extensions). As a hint: use the same standard for Intellisense and the compiler (e.g. -std=gnu23). Also take a look at: C++ extension settings reference.
It sounds like you'd need to copy the activities and code from app2 into app1, and update the app1 code to call those activities and code.
function get2cols (rng,x,y){
// Returns two particular columns of a two dimensional array or range
var rtn = [];
rng.forEach((item) => {
rtn.push([item[x],item[y]]);
});
return rtn;
}
I cannot delete it. But now I have asked the question again as "normal". See Calling static function from inline function in C
In the folder "C:\Users\XXXX\AppData\Local\Android\Sdk" there are two subfolders:
--- .downloadIntermediates
--- .temp
You need to delete the temporary contents inside them before you can re-download the "NDK (Side by side) 27.0.12077973" package; also make sure the network is in a good state.
I don't think the question is related to C++ itself; I only added the tag because I'm using C++. Also, the code I posted works, but, as I explained, I don't know whether it is guaranteed to work in all cases. It's SQLite's documentation that says you can't put WITHs inside TRIGGERs, and since it doesn't mention any language bindings, I suppose that applies in all languages.
So it turns out the code runs fine and VS Code's pylance extension wrongly showed a hint underline saying the module would not be imported, and I trusted it. Sorry for wasting everyone's time.
I'm trying to append some divs to a main div using jQuery. I take some input values and loop over a certain range; my code is below. I take 4 values from a form: "name" and "type" are text, while "first" and "last" are two numbers, say 1 and 10. I want to loop from i=first while i<last and append this markup to a div named "result". But at the moment nothing happens, and there are no errors in the console.
$(document).ready(function(){
$("#btn").click(function(){
var name = $("#name").val();
var type = $("#type").val();
var first = $("#first").val();
var last = $("#last").val();
for(i=first; i<last; i++){
$("#result").append("<div class='myClass'><h3>" + name + "</h3><h3>" + type + "</h3><h3>" + i + "</h3></div>");
}
});
});
I will delete this one and ask again
Vargula offers a markup-style syntax to customize terminal texts. If you want the text to appear blue:
import vargula as vg
vg.write("<blue>Here's a blue text!</blue>")
At the same time, you can replace <blue>...</blue> with <#ffffff>...</#ffffff> or any hexadecimal code if you want a specific color shade.
Sir, I understand and will be more careful framing the question next time. But do you have an answer for me?
For handling multiple clients in a C server, threads, forks, and non-blocking I/O (select/poll/epoll) each have trade-offs: forking is simple and robust but heavy due to process overhead; threading is lighter and easier to share state but requires careful synchronization; non-blocking I/O with select/poll/epoll scales best, avoids per-client stacks, and is the common choice for high-performance servers, though it’s more complex to implement. For a small HTTP library, the usual recommendation is non-blocking sockets with select/poll (or epoll on Linux) because it’s efficient, avoids threading complexity, and works well for many simultaneous clients while keeping the code relatively simple.
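For illustration, the readiness-based (select/poll-style) pattern looks like this in Python's selectors module; a C version built on poll() or epoll() follows the same shape, this is only a sketch of the technique:

```python
import selectors
import socket
import threading

def echo_server(ready, stop):
    """One thread multiplexes every client via readiness notifications."""
    sel = selectors.DefaultSelector()
    lsock = socket.socket()
    lsock.bind(("127.0.0.1", 0))
    lsock.listen()
    lsock.setblocking(False)
    sel.register(lsock, selectors.EVENT_READ)
    ready["port"] = lsock.getsockname()[1]
    ready["event"].set()
    while not stop.is_set():
        for key, _ in sel.select(timeout=0.1):
            if key.fileobj is lsock:              # new connection is pending
                conn, _ = lsock.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                                 # a client became readable
                data = key.fileobj.recv(1024)
                if data:
                    key.fileobj.sendall(data)     # echo it back
                else:                             # client hung up
                    sel.unregister(key.fileobj)
                    key.fileobj.close()
    sel.close()
    lsock.close()

# Usage: start the server and bounce one message off it.
ready = {"event": threading.Event()}
stop = threading.Event()
threading.Thread(target=echo_server, args=(ready, stop), daemon=True).start()
ready["event"].wait()
c = socket.create_connection(("127.0.0.1", ready["port"]))
c.sendall(b"hello")
reply = c.recv(1024)
stop.set()
print(reply)  # b'hello'
```

Note how a single loop handles accepts and reads with no per-client threads or processes, which is why this style scales well for many simultaneous clients.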
_cache.Remove(cacheKeyForDeviceStatus);
IMemoryCache is not like cookies: we can simply remove the entry, because the cache is stored on the server side, not the client side.
After deleting the cached entry, the next call finds no cache and the function will hit the DB.
I just figured out a way around this and thought I'd share it with y'all. Even though the preview URL returns null when you search for the track itself, if you use the https://api.spotify.com/v1/search endpoint and find your song there, it will have a preview URL, because that's what Spotify actually uses to serve previews in the app and on the web. Hope that helps.
I just checked Apple's documentation and found this note:
"If you upload a build and it remains in the Processing state for more than 24 hours, there may be an issue. To resolve the issue, submit a Feedback Assistant ticket or contact us."
Document URL: https://developer.apple.com/help/app-store-connect/manage-builds/view-builds-and-metadata#view-build-upload-status
I've already submitted a Feedback Assistant ticket myself.
However, since the US is celebrating Thanksgiving right now, I don't expect anyone to look at it until after the holiday.
I suspect this is a common issue that Apple needs to address for specific apps. So, if you run into this problem, don't just wait for it to resolve itself—be proactive and report it to Apple immediately.