Modify the setup.py file (or setup_modified.py).
The key step is to edit setup.py and comment out the CUDA version check, then run:
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation .
"ComfyUI NVIDIA Apex installation [with CUDA extension support], a complete guide: from the error 'Nvidia APEX normalization not installed' to a working fix"
After a successful build and install, verification should look like this:
FusedLayerNorm available
CUDA available: True
CUDA version: 12.6
When you run:
arguments[0].value = arguments[1];
You are mutating the DOM directly, but React does not look at the DOM to decide state.
React only updates when the onChange event comes from a real user interaction, with the actual event properties it expects.
Selenium’s synthetic events don't trigger React’s internal “value tracker”, so React thinks:
“No change happened, keep using the old state value.”
That’s why the visual field shows your injected value, but the DOM + React state still have an empty value.
You are calling EXECUTE @sql instead of EXEC(@sql). EXEC(@sql) runs the string as dynamic SQL, while EXECUTE @sql tries to run a stored procedure whose name is stored in the variable. Update it and try again; it will work.
@danblack 8.4.4.
innodb_buffer_pool_size=128M
innodb_log_file_size isn't set in the my.ini, a select gives me 50331648
I'll try setting them both to 1GB and see how that goes, but wouldn't making them larger result in most queries being fast and then one being much longer?
From Laravel 11 onward, new projects no longer include Kernel.php directly.
Its role has been merged into bootstrap/app.php for better performance; use that instead of the kernel.
Your rule is broken because you wrote a colon where a semicolon is required:
input:focus { outline: none: }
Replace the : after none with a ; :
input:focus { outline: none; }
I recently started working on the Microsoft Fabric platform, and this is also my first time writing on Stack Overflow. I am going through this situation, so I would love to share my approach.
For anyone coming across this answer, here is a link that covers RLS efficiently (beginner friendly):
https://learn.microsoft.com/en-us/fabric/data-warehouse/tutorial-row-level-security
Create a Security Schema
Create a UserAccess Table or use AD, etc.
Create a function based on your UserAccess table which checks username()/context to validate user
CREATE FUNCTION Security.tvf_securitypredicateOwner(@UserAccess AS VARCHAR(256))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS tvf_securitypredicate_result
FROM [Security].UserAccess
WHERE IS_MEMBER(Name) = 1
AND Name = @UserAccess;
Create a Security policy
CREATE SECURITY POLICY MyTableFilter
ADD FILTER PREDICATE Security.tvf_securitypredicateOwner([User])
ON dbo.MyFacttable
WITH (STATE = ON);
GO
This is an efficient and easy way to implement RLS. You can make it more dynamic by writing richer predicate functions that cover multiple tables.
The slowness comes from DevTools preparing for the interactive display, not just dumping raw data. For huge or deeply nested objects, property traversal (and sometimes even stringifying-for-display, if the console tries to render summaries) can be significantly slow compared to simply printing a pre-made string.
https://www.prisma.io/docs/orm/more/upgrade-guides/upgrading-versions/upgrading-to-prisma-7
Change dotenv import from
import dotenv from "dotenv";
dotenv.config({ path: path.resolve(process.cwd(), ".env") });
To:
import 'dotenv/config';
Example:
import 'dotenv/config';
import { defineConfig, env } from 'prisma/config';
export default defineConfig({
schema: './schema.prisma',
datasource: {
url: env('DATABASE_URL')
}
});
Najki, I saw in your comments that you have onboarded to RTDN. Can you share any latency figures for receiving RTDNs? Max, p99, or average; anything would be really helpful for my use case.
The safest and simplest approach is to make the v2 identity service the single source of truth for login, JWT issuance, RBAC checks, and KYC events, and have the legacy v1 system integrate with it over a well-defined REST or gRPC API (REST is usually easier for legacy systems; gRPC is faster if both sides support it). Let v1 delegate all auth-related operations to v2: for login, v1 redirects or proxies requests to the v2 auth endpoints; for permission checks, v1 validates incoming JWTs using v2’s public keys; and for KYC updates, v2 sends asynchronous webhooks or message-queue events that v1 consumes. Avoid duplicating identity logic in v1—treat v2 as a black-box identity provider. This keeps the integration secure, incremental, and future-proof while minimizing changes inside the monolith.
I'm in the same situation as you. I have created my own backend WebSocket to connect the JavaScript API of Guacamole with the guacd daemon. However, when implementing the SFTP function on the frontend, I obtained the object through onfilesystem, but how can I use this object to access files in the actual directory? I know this object has methods like createOutputStream and requestInputStream, but I've been trying for a long time without success. Plz Help Me!
Short answer: We could design “email over HTTP,” but SMTP isn’t just an old text protocol. It’s an entire global, store-and-forward, federated delivery system with built-in retry, routing, and spam-control semantics. HTTP was never designed for that.
What actually happens today is: • HTTP/JSON at the edges (webmail, Gmail API, Microsoft Graph, JMAP, etc.) • SMTP in the core (server-to-server email transport across the internet)
SMTP is not being “phased out”; it’s being hidden behind HTTP APIs.
⸻
Email is designed as:
“I hand this message to my server, and the network of mail servers will eventually get it to yours, even if some servers are down for hours or days.”
SMTP + MTAs (Postfix, Exim, Exchange, etc.) do this natively: • If the next hop is down, the sending server queues the message on disk. • It retries automatically with backoff (minutes → hours → days). • Custody of the message is handed from server to server along the path.
HTTP is designed as:
“Client opens a connection, sends a request, and expects an answer right now.”
If an HTTP POST fails or times out: • The protocol itself has no standard queueing or retry schedule. • All logic for retries, backoff, de-duping, etc. must be implemented at the application level.
You can bolt on message queues, idempotency keys, etc., but then every “mail server over HTTP” on Earth would need to implement the same complex behavior in a compatible way. At that point you’ve reinvented SMTP on top of HTTP.
SMTP already provides this behavior by design.
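As a rough illustration, here is the kind of retry machinery every "mail over HTTP" endpoint would have to reimplement itself (a minimal sketch; the `send` callable, the schedule, and the bounce handling are all hypothetical, not part of any real protocol):

```python
import random
import time

def deliver_with_retry(send, message, schedule=(60, 300, 3600), sleep=time.sleep):
    """Try to hand off `message`; on failure, retry on a backoff schedule.

    `send` is a hypothetical callable that raises ConnectionError when the
    next hop is down. Returns True on delivery, False when the schedule is
    exhausted (at which point a real MTA would generate a bounce).
    """
    for delay in schedule:
        try:
            send(message)
            return True                     # delivered: custody handed off
        except ConnectionError:
            sleep(delay + random.random())  # back off, with a little jitter
    return False                            # give up: bounce the message
```

And this sketch still omits durable on-disk queues, de-duplication, and delivery status notifications, which SMTP servers all agree on already.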
⸻
One of email’s killer features is universal federation: • alice@example.com can email bob@example.net without their admins ever coordinating.
This works because of DNS MX records: • example.com publishes MX records: “these hosts receive mail for my domain.” • Any MTA does an MX lookup, connects to that host on port 25, and can deliver mail.
If we move email transport to HTTP, we need a standardized way to say:
“This URL is the official Email API endpoint for example.com.”
That requires, globally: • A discovery mechanism (SRV records, /.well-known/... conventions, or similar). • An agreed-upon email-over-HTTP API. • Standard semantics for retries, error codes, backoff, etc.
Getting every ISP, enterprise, and government mail system to move to that at the same time is a massive coordination problem. MX + SMTP already solve routing and discovery and are deployed everywhere.
⸻
Modern email is dominated by spam/abuse defense, and those defenses are wired into SMTP’s world: • IP reputation (DNSBL/RBLs) – blocking or throttling based on connecting IP. • SPF – which IPs are allowed to send mail for a domain. • DKIM – signing the message headers/body for integrity. • DMARC – policy tying SPF + DKIM to “what should receivers do?”
All of these assume: • A dedicated mail transport port (25) and identifiable sending IPs. • A stable, canonicalized message format (MIME / RFC 5322). • SMTP envelope concepts like HELO, MAIL FROM, RCPT TO.
Move transport onto generic HTTPS and you immediately get new problems: • Shared IPs behind CDNs and API gateways (can’t just “block that IP” without collateral damage). • JSON payloads whose field order and formatting are not naturally canonical for signing. • No built-in distinction between “this POST is a mail send” and “this POST is some random API.”
You’d need to redesign and redeploy SPF/DKIM/DMARC equivalents for HTTP, and then get global adoption. That’s a huge, risky migration for users who mostly wouldn’t see a difference.
⸻
When you do any of: • Call Gmail’s HTTP API • Call Microsoft Graph sendMail • Use SendGrid/Mailgun/other REST “send email” APIs
you are sending email over HTTP – to your provider.
Under the hood, they: 1. Receive your HTTP/JSON request. 2. Convert it into a standard MIME email. 3. Look up the recipient domain’s MX record. 4. Deliver over SMTP to the recipient’s server.
So: • HTTP is used where it’s strong: client/app integration, OAuth, web tooling, firewalls, etc. • SMTP is used where it’s strong: inter-domain routing, store-and-forward, spam defenses.
Same idea with JMAP: it replaces IMAP/old client protocols with a modern HTTP+JSON interface, but the server still uses SMTP to talk to other domains.
⸻
Even if you designed a perfect “Email over HTTP” protocol today, you still have: • Millions of existing SMTP servers. • Scanners, printers, embedded devices, and old systems that only know SMTP. • Monitoring, tooling, and operational practices built around port 25 and SMTP semantics. • A global network that already works.
There’s no realistic “flip the switch” moment where everyone migrates at once. What’s happening instead is: • Core stays SMTP for server-to-server transport. • Edges become HTTP (APIs, webmail, mobile clients).
⸻
Because the problem email solves is: • asynchronous • store-and-forward • globally federated • extremely spam-sensitive
and SMTP is designed and battle-tested for exactly that.
HTTP is fantastic for: • synchronous request/response • APIs • browsers and apps
So the real answer is: • We already use HTTP where it makes sense (clients, APIs, management). • We keep SMTP where it makes sense (inter-domain, store-and-forward transport).
SMTP isn’t still here just because it’s old. It’s still here because, for global email delivery between independent domains, nothing better has actually replaced it in practice.
What is your MySQL version? Are the innodb_buffer_pool_size and innodb_log_file_size much larger than 50M?
You may want to try forcing the 32-bit runtime in Agent and running it again. If the row counts match, it's likely a driver/runtime mismatch; if not, redirect error/truncation rows and capture what's being rejected. One final suggestion is to compare the Agent's parameter values against the Designer's.
Tested the 5th/6th code sample above with JSON output and got no ReactorNotRestartable error.
$ python -V
Python 3.10.6
$ pip show scrapy | grep "Version"
Version: 2.13.4
$ pip show nest_asyncio | grep "Version"
Version: 1.6.0
I remember hitting a reactor-related Scrapy internal error before: Twisted 22.10.0 was incompatible with Scrapy 2.13.2, and pinning Twisted 21.7.0 fixed it.
It seems the operator is blocked and creates a symlink instead, which confuses the node.
com.dts.freefiremax.zip
1 Cannot open output file : errno=2 : No such file or directory : /storage/emulated/0/Android/obb/com.dts.freefiremax/main.2019116013.com.dts.freefiremax.obb
Why your snippet works but your app doesn't:
In your Stack Snippet, the focus happens during initial page load (which browsers sometimes treat as "user-initiated"), but in your real app, the focus occurs after a user action (like clicking "edit"), and React's useEffect delays it just enough for the browser to consider it non-user-driven.
The fix
Force the browser to treat focus as user-driven by focusing synchronously within the user's click event (not in useEffect).
const EditableCell = () => {
const inputRef = useRef(null);
const handleEditStart = (e) => {
inputRef.current?.focus();
};
return (
<td onClick={handleEditStart}>
<div
ref={inputRef}
role="listbox"
tabIndex="0"
className="focusable"
>
Content
</div>
</td>
);
};
I tried to use a module map for this package but didn't find a proper solution that works. This is really the best I've got in half a week that compiles without errors and successfully links the C libraries. But I will read it, thanks.
@brinkdinges Europe rates seem to differ from other parts of the world. (Yet I can't find $779 anywhere.)
If you check the USA price, it's $550 for GS vs $520 for SMC.
Understood. I will see what I can experiment with while you work through your thoughts on the potential for a defined Desktop API. Thanks for your thoughts and the time you allocate for all the issues you work to improve. I hope my observations posted in the issue notes on Github have added to the thought process.
Perfect - thanks! After reading about the differences between the two options, I decided that --skip-worktree fits my use case nicely.
I just added version 1.12.5 of micrometer-registry-prometheus; it works very well for me.
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>1.12.5</version>
</dependency>
You don’t need to pad the series manually.
To get 3-month means aligned to January 1, use the quarterly start frequency with January as the anchor: `QS-JAN`. This will automatically create quarters starting in Jan, Apr, Jul, and Oct. For example:
r = s.resample("QS-JAN").mean()
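For instance, with a hypothetical daily series covering one year (the series and variable names here are mine, not from the question):

```python
import pandas as pd

# A dummy daily series for 2023; any series with a DatetimeIndex works the same way.
s = pd.Series(1.0, index=pd.date_range("2023-01-01", "2023-12-31", freq="D"))

# Quarter-start frequency anchored at January: bins begin Jan 1, Apr 1, Jul 1, Oct 1.
r = s.resample("QS-JAN").mean()
print(list(r.index.month))  # [1, 4, 7, 10]
```

Each bin is labeled by its start date, so the first mean is labeled 2023-01-01 as required.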
It ended up being an issue with the Giada device; after contacting them and moving to their newest image, the issue resolved itself. I tried to mark Morrison Chang's comment as the answer, but it seems the tick icon no longer exists where it used to be?
I'm not sure it's an appropriate question; it could be off-topic because this site is not a code-review service.
If you want a review, then https://codereview.stackexchange.com is the place to go. See their help centre for advice before posting. They won't appreciate you posting a link to code on another site.
Hopefully this helps someone else who is trying to learn PowerShell. If you are getting errors when running Get-Content, make sure the file doesn't have a hidden extension. You can check by running the dir command against the directory you're in; it shows the extensions of the files there. In my case, my lab file had an extra .txt appended. Once I removed it, Get-Content worked properly. I have included a screenshot of the successful Get-Content command as well as an example error message.
Normally to export C APIs to swift, you need module maps. Please read this helpful link first: https://rderik.com/blog/making-a-c-library-available-in-swift-using-the-swift-package/
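For reference, a minimal module map looks roughly like this (a sketch with hypothetical header and library names; see the linked article for the full package layout):

```
module CKeypad {
    header "keypad.h"
    link "keypad"
    export *
}
```

With that in place, Swift code can simply `import CKeypad` and call the C functions declared in the header.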
Issue seemed to be an apache error, or condition. Rebooting fixed it, but not apache restart.
Xcode -> Settings -> Components -> Download Metal Toolchain 26.1.1
This fixed it for me!
Shai, does Maven relay the Stub.java information in the javase code to the build engine when we run the build desktop goal? I tried a change, but the change was not recognized.
Here is a general way to solve this problem in two steps.
According to the official pandas documentation, and based on the examples shown on Moonbooks (source: https://fr.moonbooks.org/Articles/Comment-trouver-la-valeur-maximum-dans-une-colonne-dune-dataframe-avec-pandas-/), the first step is to import the pandas library: import pandas as pd. Pandas is used for data analysis, grouping, and filtering.
To illustrate the idea, you can create a DataFrame, which is a table of structured data:
data = {
'ITEM NUMBER': [100, 105, 100, 100, 100, 100, 105, 105, 105, 105, 100],
'STATUS': ["OK", "OK", "NG", "NG", "OK", "OK", "OK", "NG", "OK", "OK", "NG"],
'TYPE': ["RED", "YELLOW", "RED", "BLACK", "RED", "BLACK", "YELLOW", "YELLOW", "RED", "YELLOW", "BLACK"],
'AREA': ['A01', 'B01', "A02", "A03", "A04", "A05", "B02", "B03", "B04", "B05", "A06"],
'QUANTITY': [5, 15, 8, 4, 9, 2, 19, 20, 3, 4, 1],
'PACKS TO FILL': [10, 5, 2, 6, 1, 8, 1, 0, 17, 16, 9]
}
Step 1:
Once done, build a DataFrame and keep only the rows where the numeric column is greater than or equal to zero (data is a plain dict, so it must be wrapped in a DataFrame first):
df = pd.DataFrame(data)
df_filtered = df[df['COLUMN_NAME'] >= 0]
Step 2:
Next step, use groupby() to group by the other columns:
grouped = df_filtered.groupby(['ITEM NUMBER', 'STATUS', 'TYPE']).sum()
You can also use .mean(), .size(), or .agg() depending on your needs.
To reassure you:
The example comes from a reliable source.
The method follows the official pandas documentation.
Filtering and grouping is a standard way to work with DataFrames.
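Putting both steps together as a runnable sketch (I use QUANTITY as the filter column purely as an example, since COLUMN_NAME above is a placeholder):

```python
import pandas as pd

data = {
    'ITEM NUMBER': [100, 105, 100, 100, 100, 100, 105, 105, 105, 105, 100],
    'STATUS': ["OK", "OK", "NG", "NG", "OK", "OK", "OK", "NG", "OK", "OK", "NG"],
    'TYPE': ["RED", "YELLOW", "RED", "BLACK", "RED", "BLACK", "YELLOW", "YELLOW", "RED", "YELLOW", "BLACK"],
    'AREA': ['A01', 'B01', "A02", "A03", "A04", "A05", "B02", "B03", "B04", "B05", "A06"],
    'QUANTITY': [5, 15, 8, 4, 9, 2, 19, 20, 3, 4, 1],
    'PACKS TO FILL': [10, 5, 2, 6, 1, 8, 1, 0, 17, 16, 9],
}
df = pd.DataFrame(data)

# Step 1: keep rows where the numeric column is >= 0
df_filtered = df[df['QUANTITY'] >= 0]

# Step 2: group by the descriptive columns and sum the quantities
grouped = df_filtered.groupby(['ITEM NUMBER', 'STATUS', 'TYPE'], as_index=False)['QUANTITY'].sum()
print(grouped)
```

For example, the (100, OK, RED) group sums the two matching rows (5 and 9) into 14.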
Just about every "JPEG image" that you encounter will be in either JFIF or EXIF file format. These two file formats are virtually identical in terms of encoding – except for the different "header" (or "Application Segment"). If you have software that handles JFIF files, then provided that it isn't strict about requiring a JFIF header, it should be able to handle most EXIF files as well.
One good example of an EXIF file that isn't in JFIF format is an image in CMYK color format. Such files are commonly used in the publishing industry. The four color channels C, M, Y and K are fundamentally different to the three channels Y, Cb and Cr within JFIF files.
For many years, browsers couldn't even display such CMYK JPEG files. Testing this today, I see that Firefox and Chrome both handle them okay – albeit with unnatural looking colors.
In summary, don't expect that software that handles JFIF files will automatically handle all flavors of JPEG files.
Alright, I have performed a simple test to check if the UDF is able to detect a possible cycle of about 100000 elements, and it does! The test does these steps:
Insert pairs (1, 2), (2, 3), all the way up to (99999, 100000) with basic INSERT INTO operations. These were very fast at first, but slowed down as the database grew bigger
Try to insert (100000, 1). This operation failed with the expected error
I suppose WITH's recursion depth is not limited, unlike TRIGGERs', which is limited by the SQLITE_MAX_TRIGGER_DEPTH definition? In any case, this was merely a test to ensure WITH would not cause any problems for being used in a UDF inside a TRIGGER, and my actual use case should not need that many rows.
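For anyone curious, the cycle check can be reproduced with Python's built-in sqlite3 module. This is a standalone sketch with hypothetical table and column names (and a shorter chain), not the asker's actual UDF:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
# Build the chain 1 -> 2 -> ... -> 1000 (scaled down from 100000 in the test)
con.executemany("INSERT INTO edges VALUES (?, ?)",
                [(i, i + 1) for i in range(1, 1000)])

def creates_cycle(con, src, dst):
    """True if inserting edge (src, dst) would close a cycle:
    walk the graph from dst and see whether src is reachable."""
    row = con.execute("""
        WITH RECURSIVE reach(n) AS (
            SELECT ?
            UNION
            SELECT e.dst FROM edges AS e JOIN reach AS r ON e.src = r.n
        )
        SELECT 1 FROM reach WHERE n = ? LIMIT 1
    """, (dst, src)).fetchone()
    return row is not None

print(creates_cycle(con, 1000, 1))     # True: (1000, 1) closes the chain
print(creates_cycle(con, 1000, 2000))  # False: 2000 leads nowhere
```

The UNION (rather than UNION ALL) deduplicates visited nodes, so the walk terminates even on graphs that already contain cycles.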
Thank you!
How about using jq?
echo -n "/Volumes/Macintosh HD/Music/1-01 デ ジタルライフ.mp3" | jq -s -R -r @uri
It outputs
%2FVolumes%2FMacintosh%20HD%2FMusic%2F1-01%20%E3%83%86%E3%82%99%20%E3%82%B7%E3%82%99%E3%82%BF%E3%83%AB%E3%83%A9%E3%82%A4%E3%83%95.mp3
There was a feature request: https://github.com/pandas-dev/pandas/issues/63153 but it doesn't look like it will be accepted.
As of Xcode 26, I don't know why, but it doesn't seem to respect folder structures, at least with a new Swift iOS app.
I ended up solving this by explicitly creating two new steps in Build Settings, where I copied the resources that have duplicate names and gave each an explicit sub_path.
i'm surprised nobody mentioned tasklist
it seems almost tailor made to do this since you can just apply a filter to only pull processes by username:
tasklist.exe /fi "USERNAME eq $env:USERNAME"
will spit out all processes by the current user. no fuss, no admin, no wmi
it spits out an array of text so it's very easy to access and manipulate
want the nth task? just slap the index next to the command:
(tasklist.exe /fi "USERNAME eq $env:USERNAME")[i]
looking for a particular process? pipe it into findstr:
tasklist.exe /fi "USERNAME eq $env:USERNAME" | findstr.exe /i /r /c:"task*"
here's my oneliner to pull the pid of a desired task from the current user*:
((tasklist.exe /fi "USERNAME eq $env:USERNAME" /FO CSV | findstr.exe /i /r /c:"task*") -replace('"\S+",("\d{2,}").*', '$1')) -replace ('"(\d+).*', '$1')
which of course you can then pass to either:
Stop-Process -Id $(((tasklist.exe /fi "USERNAME eq $env:USERNAME" /FO CSV | findstr.exe /i /r /c:"task*") -replace('"\S+",("\d{2,}").*', '$1')) -replace ('"(\d+).*', '$1'))
or a more graceful taskkill:
taskkill.exe /PID $(((tasklist.exe /fi "USERNAME eq $env:USERNAME" /FO CSV | findstr.exe /i /r /c:"task*") -replace('"\S+",("\d{2,}").*', '$1')) -replace ('"(\d+).*', '$1'))
*explanation
get all of the current user's tasks:
tasklist.exe /fi "USERNAME eq $env:USERNAME"
format the output as comma separated (gives us anchors when manipulating the output with regex):
/FO CSV
find your desired task (/i case-insensitive, /r regex, /c: literal string):
findstr.exe /i /r /c:"task*"
this first replace will spit out the PID with quotations thanks to the CSV format:
-replace('"\S+",("\d{2,}").*', '$1')
so you'd get something like "1234"; hence the second replace to clean the quotation marks:
-replace ('"(\d+).*', '$1')
if you want all the PIDs of the current user for some reason, here you go:
((tasklist.exe /fi "USERNAME eq $env:USERNAME" /FO CSV) -replace('"\S+",("\d{2,}").*', '$1')) -replace ('"(\d+).*', '$1')
anyway, cheers hope this helps someone in the future!!
The issue was caused by the space in my Windows username path. I was able to work around it by calling the Python interpreter using the short path (C:\Users\ALEXKA~1\...) instead of the full path with spaces. After that, creating virtual environments worked correctly.
Just what I'm looking for, but I'd want to display it only for specific attributes so it won't show for all my variable products.
To add HLP files to your application, you’ll need to package the help file with your project and then call it from your code using the correct system functions. Since HLP is an old Windows help format, make sure the target system supports WinHelp. Place the .hlp file in your app directory, then trigger it with a call like WinHelp() or ShellExecute(), depending on your framework. If you’re targeting modern Windows versions, you may also need to install the WinHlp32 update from Microsoft because it’s no longer included by default.
ConnectionClosedOK with code 1001 (going away) is a normal close of the WebSocket, not a bug in your code. Binance explicitly states that a single WebSocket connection is only valid for 24 hours, so after running for a long time (days) you should expect the server to disconnect you and you must implement reconnection logic on the client side.
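A minimal reconnect wrapper could look like this (a sketch only; `connect` stands in for whatever opens and runs your Binance WebSocket session until it closes):

```python
import random
import time

def run_with_reconnect(connect, attempts=None, sleep=time.sleep, max_backoff=60):
    """Run `connect()` (which blocks until the socket closes) in a loop.

    A clean close, such as Binance's 24-hour disconnect, just triggers a
    quick reconnect; errors back off exponentially. `attempts` limits the
    iterations for testing; leave it as None in production.
    """
    backoff = 1.0
    done = 0
    while attempts is None or done < attempts:
        done += 1
        try:
            connect()
            backoff = 1.0                            # normal close: reset backoff
        except ConnectionError:
            backoff = min(backoff * 2, max_backoff)  # failure: back off harder
        sleep(backoff + random.random())             # jitter avoids thundering herd
```

In a real client you would also resubscribe to your streams inside `connect` after each reconnect, since subscriptions do not survive the old connection.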
Tested in image: confluentinc/cp-kafka:8.1.0
healthcheck:
test: /bin/kafka-topics --bootstrap-server localhost:9092 --list | grep -E '<your topic 1>|<your topic 2>|etc.' &> /dev/null
start_period: 60s
interval: 20s
timeout: 5s
retries: 5
Then what should I do? I am always displaying maps, and MapScreen() requires a view model to display markers from the API/local data.
I'm not certain that I totally understood what you want, nor how to properly concatenate two finite automata, but I have come up with a possible solution.
I understand what you're trying to do, but why would you compare column_x and column_y in eager loading?
For example, when you do:
A::with("bs")->get()
it makes no sense to compare column_y with $this->column_x, because column_x doesn't exist there.
you could do something like:
A::with([
"bs" => fn ($query) => $query->where('column_y', '=', 'something'),
])->get()
Have a good day.
Dorian, PingMyNetwork
For Linux users that have a .yarn directory in their home (~/.yarn), delete:
.yarn/berry/cache
You need to click on Export table, then go to Options and change the table from edges to nodes, depending on which file you want to download, after running the required functions in the Statistics panel.
I was able to find my answer.
First of all instead of having my texture starting from the middle, I had it starting from a corner to make it easier to solve. Then I sent the mesh into the function findOffset() to calculate the offset I needed to add/remove in the shader.
function findOffset(mesh, material, options = {}) {
const autoScale = options.autoScale ?? false; // scale texture to fit mesh
const uniformScale = options.scale ?? material.userData.ratio; // manual scale multiplier
if (!material.userData.shader) {
console.warn("Shader not yet compiled. Call this AFTER first render.");
return;
}
const shader = material.userData.shader;
// Compute world-space bounding box
const bbox = new THREE.Box3().setFromObject(mesh);
const size = bbox.getSize(new THREE.Vector3());
let min = bbox.min; // bottom-left-near corner (world-space!)
min.x = min.x * uniformScale;
min.y = min.y * uniformScale;
min.z = min.z * uniformScale;
// Compute offset for back sides
const back = size.clone().multiplyScalar(uniformScale);
// Pass to shader
shader.uniforms.triplanarScaleX = { value: uniformScale };
shader.uniforms.triplanarScaleY = { value: uniformScale };
shader.uniforms.triplanarScaleZ = { value: uniformScale };
shader.uniforms.triplanarOffset.value.copy(min);
shader.uniforms.triplanarOffsetBack.value.copy(back);
}
Finally, in the shader, I added the offset.
vec4 sampleTriplanarTexture(sampler2D tex, vec3 p, vec3 n, vec3 triplanarOffset, vec3 triplanarOffsetBack, float triplanarScaleX, float triplanarScaleY, float triplanarScaleZ) {
vec3 blend_weights = abs(n);
vec2 uvX;
vec2 uvY;
vec2 uvZ;
vec4 colX;
vec4 colY;
vec4 colZ;
vec3 N = normalize(n);
vec3 V = normalize(vViewPosition);
// Decide dominant axis (stable per face)
vec3 w = abs(N);
bool faceX = w.x > w.y && w.x > w.z;
bool faceY = w.y > w.x && w.y > w.z;
bool faceZ = w.z > w.x && w.z > w.y;
bool back = false;
if (faceZ && N.z < 0.0) {
back = true;
}
if (faceX && N.x < 0.0) {
back = true;
}
if (faceY && N.y < 0.0) {
back = true;
}
// Identify cube face by normal
if (back == false) { // FRONT
blend_weights = normalize(max(vec3(0.001), blend_weights));
blend_weights /= (blend_weights.x + blend_weights.y + blend_weights.z);
uvX = vec2( p.y * triplanarScaleY, p.z * triplanarScaleZ ) + triplanarOffset.yz;
uvY = vec2( p.x * triplanarScaleX, p.z * triplanarScaleZ ) + triplanarOffset.xz;
uvZ = vec2( p.x * triplanarScaleX, p.y * triplanarScaleY ) + triplanarOffset.xy;
uvX.x = 1.0 - uvX.x; // left/right border horizontal mirror
uvX.y = 1.0 - uvX.y; // left/right border horizontal mirror
uvY.y = 1.0 - uvY.y; // front border vertical flip
colX = texture2D(tex, uvX);
colY = texture2D(tex, uvY);
colZ = texture2D(tex, uvZ);
}else{ //BACK
blend_weights = normalize(max(vec3(0.001), blend_weights));
blend_weights /= (blend_weights.x + blend_weights.y + blend_weights.z);
uvX = vec2( p.y * triplanarScaleY, p.z * triplanarScaleZ ) + triplanarOffset.yz;
uvY = vec2( p.x * triplanarScaleX, p.z * triplanarScaleZ ) + triplanarOffset.xz;
uvZ = vec2( p.x * triplanarScaleX, p.y * triplanarScaleY ) + triplanarOffset.xy;
uvX.y = 1.0 - uvX.y; // left/right border horizontal mirror
uvZ.y = 1.0 - uvZ.y; // front border vertical flip
uvZ += vec2(0.0, triplanarOffsetBack.z); //back side offset
uvX += vec2(triplanarOffsetBack.z, 0.0); //font side offset
colX = texture2D(tex, uvX);
colY = texture2D(tex, uvY);
colZ = texture2D(tex, uvZ);
}
return colX * blend_weights.x + colY * blend_weights.y + colZ * blend_weights.z;
}
Which gives the expected result.
You could use:
the fissix package; for a full understanding, read the docs here: https://pypi.org/project/fissix/
the futurize package, which plays the same role as fissix; check the repo: https://python-future.org/futurize.html
You might also glance at this repo: https://github.com/PyCQA/modernize.
Delete the file
C:\Users\{YOURUSERNAME}\AppData\Roaming\Unreal Engine\UnrealBuildTool\BuildConfiguration.xml
and then regenerate the Unreal project (by deleting the Visual Studio files, e.g. the .sln and related folders). Or you can delete the whole Unreal project and create a new one, and all good!
// Find the submenu item (e.g., "App Configuration") and click it
By locator = By.xpath("(//a[contains(@href,'/docs/configuration')])[1]");
WebElement config = webdriver.findElement(locator);
config.click();
It might work if you open a terminal as administrator and execute the script from there.
@Mark, but in this case the data being written out is an image and encoding it to a str does not make much sense.
I had the same problem in Germany; I had just forgotten to verify my age with an ID for my Google account (not only tick the box in Android Studio). Now it works.
Thanks, that makes sense. The version sync benefit is exactly why I was considering a monorepo, but your point about CI/CD triggering unnecessary deploys is helpful. I might start with a monorepo while developing and then split them later once the project grows. Appreciate the clarification.
@g-grothendieck BINGO! That solves my problem-- your first solution. I just note that the use of . in arguments requires using the %>% form of pipes, not |>
The issue happened because I repositioned the sphere before adding it as a child. The repositioning triggered an unwanted collision event, even though the collision filter itself was correct.
A safer workflow is to add the child first and only reposition it afterwards (or to suspend collision callbacks while attaching).
This prevents accidental collision callbacks during the attach sequence.
// Source - https://stackoverflow.com/a/15361322
// Posted by UZUMAKI
// Retrieved 2025-11-21, License - CC BY-SA 3.0
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Pragma: public");
I recently found myself in a similar situation and ended up building an EF Core Provider for Kusto/Azure Data Explorer myself.
Here's the Nuget link:
I have seen this answer to a very similar question https://stackoverflow.com/a/79054799/250838, but I wouldn't classify that as something easy to do / implement.
OK, I found out that there is in fact a 32-bit glibc developer package missing for whatever reason, so installing it helped:
sudo zypper in glibc-devel-32bit
I know this question was asked a while ago, but I recently ran into the same error.
In my case, the issue arose from a mismatch between my entity and the actual table structure (a field existed in the entity but not in the database table, for example).
So double-check that your entity is fully aligned with your database schema.
Fixing that should resolve the problem.
I was struggling with redirect() not triggering at all for half an hour, and it turned out my layout.tsx didn't render its children prop. After I added it to layout.tsx, the redirect worked, lmao.
I don't believe it's supported yet; see the source code:
But I bet you could make a JS workaround, like adding the index as a property to your data and then using context.item.index?
Your .eslintrc.js uses CommonJS syntax (module.exports), while your package.json declares "type": "module". Rename the config to .eslintrc.cjs (or rewrite it as ESM) so Node treats it as CommonJS.
The best and easiest way to resolve this is to remove the required "C##" username prefix so you can create the new user normally, as in a non-CDB. Simply apply the following, then restart the Oracle DB:
ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER SYSTEM SET COMMON_USER_PREFIX = '' SCOPE=SPFILE;
Here CDB$ROOT is the root container (not a pluggable database), and SCOPE=SPFILE means the change only takes effect after the required restart.
Four years late, but if anyone finds this and it's the binary-cache error: deleting the binary-cache under root/.cache/nix and letting it rebuild fixed it for me.
I have made some changes in the JMeter source, but now I don't know why the certificate hierarchy in Chrome contains only the leaf certificate, so the connection isn't established. Any suggestions?
To answer my own question, based on the comments above: the preprocessed syntax is indeed invalid according to the official standards; it relies on a GNU preprocessor extension that requires -std=gnu++XY.
In my Mac build I had added -std=c++20 myself, thinking it wouldn't hurt, but in fact I was shooting myself in the foot.
On an lxd container one could go this way:
lxc profile create losetup-profile
lxc profile device add losetup-profile loop6 unix-block path=/dev/loop6
lxc profile add mycontainer losetup-profile
Adapt the profile name, loop device and container name to suit your needs.
More useful info at https://www.forshee.me/container-mounts-in-ubuntu-1604/
Add
android.experimental.enable16kPages=true
to android/gradle.properties.
I have come up with a raw client. It serves the purpose very well, actually more than very well. I extended it beyond my original goal of scraping the PBC stream: it reads the chosen stream's audio data and routes it to an Icecast server. I have now added a new feature: it extracts a YouTube live stream's audio, encodes it to MP3, and sends it to the Icecast server. I also want to add live microphone casting and finish it up with a nice GUI in Tkinter or another suitable Python library. Sorry, I can't post the code here, as I have split the whole thing into separate modules.
I'd say it depends on what you're working on. If you're working for a single company, it makes sense to put WP into the repo, to be able to track changes and all. If it is something separate, like a plugin or a theme, I think you just define the WP version in style.css with:
Requires at least: 5
Tested up to: 6.8
And that's it. As for being safe in prod: if you have a staging server, version your updates through a versioning system, never change anything on stage/prod manually (only through pushes), and test your code well, you should be fine. No?
https://github.com/tokyoxpa3/RdpClientBridge
I suspect you are connecting over RDP. On non-Server editions of Windows, RDP's D3D support is disabled by default and has to be enabled manually. The fix is as follows:
Step 1: Open the Local Group Policy Editor
Press Win + R to open the Run dialog.
Type gpedit.msc and press Enter.
In the left pane, navigate to:
Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Remote Session Environment
In the right pane, find the setting you highlighted:
"Use hardware graphics adapters for all Remote Desktop Services sessions"
(On some newer Windows versions or translations, this item may be named "Use the hardware default graphics adapter for all Remote Desktop Services sessions".)
Double-click to open the setting:
Set its state to Enabled.
Click OK to save.
Open a Command Prompt (Win + R, then cmd).
Type the following command and press Enter:
gpupdate /force
Disconnect the current Remote Desktop session, then reconnect and run your game or D3D application to check whether the display is working correctly.
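If gpedit.msc is not available (for example, on Home editions), the same policy can be applied through the registry instead. This is only a sketch based on the commonly documented registry mapping of that policy; the value name bEnumerateHWBeforeSW is my assumption, so verify it against your Windows version before relying on it:

```shell
:: Assumed registry equivalent of the Group Policy setting above
:: (value name bEnumerateHWBeforeSW is an assumption - double-check it)
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v bEnumerateHWBeforeSW /t REG_DWORD /d 1 /f
gpupdate /force
```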
Got a similar problem today with Yarn 4.9.2 and Next.js 16. It turned out the problem was with the PnP nodeLinker; switch to node-modules by adding nodeLinker: node-modules to .yarnrc.yml and the problem is gone.
I have tried Yarn PnP several times, but I always end up going back to node-modules.
I would recommend also checking out the Vertical Slice architecture as an alternative.
Read more about that here:
For good or ill, @user1072814 inspired me to give it another go, and with a little help from ChatGPT (it gets SO much wrong that I'm not worried about our AI overlords taking over just yet) this is what I ended up with. It works on all my devices using the endpoint given at the end of the script.
Good enough for my meagre needs (ad blocking), but shared here in case anyone benefits. Change the variables at the top of the script to suit. This was run and tested on an Oracle free-tier Ubuntu minimal setup from bare; remember to check and open any ports you may require in the Oracle pages (as long as you have 80 and 443 allowed, you're golden). Edit the toml block for dnscrypt-proxy below
# dnscrypt-proxy config - for server-side DoH TLS
as you see fit for your own upstream server (the default is Cloudflare; again, I'm only using this for centralised ad blocking, not tin-foil-hat relays and anonymising) and personal dnscrypt options.
#!/usr/bin/env bash
# dnscrypt_oneclick_final_doh_direct_b.sh
# One-click installer: dnscrypt-proxy (DoH TLS on 443) + nginx (HTTP only) +
# Let's Encrypt (ECDSA secp256r1) + renewal hook + health monitor + alerts
#
DOMAIN="your domain here"
EMAIL="your email here"
set -euo pipefail
set -x
CERT_DIR="/etc/letsencrypt/live/${DOMAIN}"
WEBROOT="/var/www/html"
WWW_DIR="/var/www/ca"
DNSCRYPT_CONF="/etc/dnscrypt-proxy/dnscrypt-proxy.toml"
DNSCRYPT_USER_FILES="/usr/local/dnscrypt-proxy"
DOH_PORT=443
LOCAL_DOH_PORT=3000
NGINX_HTTP_CONF="/etc/nginx/sites-available/dnscrypt-http.conf"
NGINX_HTTP_ENABLED="/etc/nginx/sites-enabled/dnscrypt-http.conf"
RENEW_HOOK_DIR="/etc/letsencrypt/renewal-hooks/deploy"
HEALTH_SCRIPT="/usr/local/bin/dnscrypt_health.sh"
LOG_FILE="/var/log/dnscrypt_health.log"
if [ "$(id -u)" -ne 0 ]; then
echo "Run as root: sudo $0"
exit 1
fi
export DEBIAN_FRONTEND=noninteractive
echo "=== Installing packages ==="
apt update
apt install -y dnscrypt-proxy nginx openssl curl unzip iptables-persistent netfilter-persistent certbot python3-certbot-nginx mailutils cron
echo "=== Creating webroot & WWW dirs ==="
mkdir -p "${WEBROOT}/.well-known/acme-challenge" "${WWW_DIR}"
chown -R www-data:www-data "${WEBROOT}" "${WWW_DIR}"
# -------------------------
# iptables (safe insert)
# -------------------------
insert_if_missing() {
# iptables -C does not accept a rule number, so strip an optional
# numeric insert position before checking whether the rule exists
local chain="$1"; shift
local rulenum=""
case "${1:-}" in
''|*[!0-9]*) ;;
*) rulenum="$1"; shift ;;
esac
if iptables -C "$chain" "$@" 2>/dev/null; then
echo "Rule exists: $chain $*"
else
iptables -I "$chain" ${rulenum:+"$rulenum"} "$@"
echo "Inserted: $chain ${rulenum:+$rulenum }$*"
fi
}
if ! iptables -C INPUT -j ACCEPT 2>/dev/null; then
iptables -I INPUT -j ACCEPT
fi
insert_if_missing INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT || true
insert_if_missing INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT || true
insert_if_missing INPUT 6 -m state --state NEW -p udp --dport 443 -j ACCEPT || true
insert_if_missing INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT || true
netfilter-persistent save || iptables-save > /etc/iptables/rules.v4
# -------------------------
# dnscrypt-proxy config (minimal + DoH on 0.0.0.0:443)
# -------------------------
echo "=== Backing up and writing dnscrypt-proxy config ==="
[ -f "${DNSCRYPT_CONF}" ] && cp "${DNSCRYPT_CONF}" "${DNSCRYPT_CONF}.bak-$(date +%s)" || true
cat > "${DNSCRYPT_CONF}" <<TOML
# dnscrypt-proxy config - for server-side DoH TLS
# local DNS listeners (for local clients and system)
server_names = ['cloudflare']
listen_addresses = ['127.0.0.1:53']
ipv4_servers = true
ipv6_servers = false
dnscrypt_servers = true
doh_servers = true
require_dnssec = true
require_nolog = true
require_nofilter = true
log_file = '/var/log/dnscrypt-proxy/dnscrypt-proxy.log'
log_level = 2
log_file_latest = true
block_ipv6 = true
block_unqualified = true
block_undelegated = true
reject_ttl = 10
cache = true
cache_size = 4096
cache_min_ttl = 2400
cache_max_ttl = 86400
cache_neg_min_ttl = 60
cache_neg_max_ttl = 600
netprobe_address = '1.1.1.1:53'
# External DoH/TLS listener (dnscrypt-proxy will terminate TLS directly on 0.0.0.0:443)
[local_doh]
# listen on all interfaces port 443 for external DoH clients
listen_addresses = ['0.0.0.0:443']
path = '/dns-query'
# cert paths use the DOMAIN set at the top of this script (heredoc is unquoted so the variable expands)
cert_file = '/etc/letsencrypt/live/${DOMAIN}/fullchain.pem'
cert_key_file = '/etc/letsencrypt/live/${DOMAIN}/privkey.pem'
# Optional: also serve a local DoH endpoint (unused by external clients,
# but useful for testing or nginx reverse-proxy if you want)
# Add another local_doh listener if your dnscrypt-proxy supports it (some versions vary)
# local_doh_listen = ['127.0.0.1:3000']
[captive_portals]
map_file = '/usr/local/dnscrypt-proxy/captive-portals.txt'
[blocked_names]
blocked_names_file = '/usr/local/dnscrypt-proxy/blocked-names.txt'
log_file = '/usr/local/dnscrypt-proxy/blocked-names.log'
[blocked_ips]
blocked_ips_file = '/usr/local/dnscrypt-proxy/blocked-ips.txt'
log_file = '/usr/local/dnscrypt-proxy/blocked-ips.log'
[allowed_names]
allowed_names_file = '/usr/local/dnscrypt-proxy/allowed-names.txt'
[allowed_ips]
allowed_ips_file = '/usr/local/dnscrypt-proxy/allowed-ips.txt'
[broken_implementations]
fragments_blocked = [
'cisco',
'cisco-ipv6',
'cisco-familyshield',
'cisco-familyshield-ipv6',
'cisco-sandbox',
'cleanbrowsing-adult',
'cleanbrowsing-adult-ipv6',
'cleanbrowsing-family',
'cleanbrowsing-family-ipv6',
'cleanbrowsing-security',
'cleanbrowsing-security-ipv6',
]
[sources]
[sources.public-resolvers]
urls = [
'https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/public-resolvers.md',
'https://download.dnscrypt.info/resolvers-list/v3/public-resolvers.md'
]
cache_file = 'public-resolvers.md'
minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
refresh_delay = 73
prefix = ''
TOML
# stop dnscrypt-proxy so it doesn't try to bind before certs exist
systemctl daemon-reload
systemctl enable dnscrypt-proxy || true
systemctl stop dnscrypt-proxy || true
# -------------------------
# Ensure nginx will not fail loading SSL during bootstrap
# -------------------------
echo "=== moving existing enabled sites aside and disabling SSL confs ==="
mkdir -p /etc/nginx/sites-enabled.bak
if [ -d /etc/nginx/sites-enabled ]; then
for s in /etc/nginx/sites-enabled/*; do
[ -e "$s" ] || continue
mv -f "$s" /etc/nginx/sites-enabled.bak/ || true
done
fi
if [ -d /etc/nginx/conf.d ]; then
for f in /etc/nginx/conf.d/*.conf; do
[ -f "$f" ] || continue
if grep -qi "ssl_certificate" "$f"; then
mv -f "$f" "${f}.disabled-ssl" || true
fi
done
fi
if grep -qi "ssl_certificate" /etc/nginx/nginx.conf 2>/dev/null; then
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak-$(date +%s)
sed -i '/ssl_certificate/d' /etc/nginx/nginx.conf || true
sed -i '/ssl_certificate_key/d' /etc/nginx/nginx.conf || true
fi
# -------------------------
# Temporary HTTP-only nginx config for ACME
# -------------------------
echo "=== installing temporary HTTP-only nginx config ==="
cat > "${NGINX_HTTP_CONF}" <<NGHTTP
server {
listen 80;
listen [::]:80;
server_name ${DOMAIN};
root ${WWW_DIR};
index index.html;
location /.well-known/acme-challenge/ {
root ${WEBROOT};
}
location / {
return 301 https://\$host\$request_uri;
}
}
NGHTTP
ln -sf "${NGINX_HTTP_CONF}" "${NGINX_HTTP_ENABLED}"
nginx -t
systemctl restart nginx
# -------------------------
# Obtain ECDSA cert with certbot (webroot)
# -------------------------
echo "=== Requesting ECDSA certificate from Let's Encrypt (secp256r1) ==="
certbot certonly --webroot -w "${WEBROOT}" -d "${DOMAIN}" --non-interactive --agree-tos -m "${EMAIL}" --key-type ecdsa --elliptic-curve secp256r1 || {
echo "Certbot issuance failed; inspect /var/log/letsencrypt/letsencrypt.log"
exit 1
}
if [ ! -f "${CERT_DIR}/fullchain.pem" ]; then
echo "Expected certs not found in ${CERT_DIR}; aborting"
exit 1
fi
# fix permissions so dnscrypt-proxy and nginx can read certificate files
chown -R root:root /etc/letsencrypt
chmod 644 "${CERT_DIR}/fullchain.pem" || true
chmod 640 "${CERT_DIR}/privkey.pem" || true
chown root:www-data "${CERT_DIR}/privkey.pem" || true
# -------------------------
# Ensure systemd socket not masked and start dnscrypt-proxy
# -------------------------
echo "=== ensuring dnscrypt-proxy socket is unmasked and starting service ==="
# If systemd socket unit exists and is masked, unmask it
if systemctl list-unit-files | grep -q '^dnscrypt-proxy.socket'; then
sudo systemctl unmask dnscrypt-proxy.socket || true
sudo systemctl enable dnscrypt-proxy.socket || true
sudo systemctl start dnscrypt-proxy.socket || true
fi
# start the dnscrypt-proxy service which will bind 0.0.0.0:443
systemctl restart dnscrypt-proxy || {
# If systemd refuses because socket is masked, try to start service directly after unmasking
systemctl daemon-reload
systemctl unmask dnscrypt-proxy.socket || true
systemctl restart dnscrypt-proxy || true
}
sleep 1
systemctl status dnscrypt-proxy --no-pager || true
# -------------------------
# Finalize nginx (keep HTTP-only)
# -------------------------
echo "=== finalizing nginx (HTTP-only, ACME/static only) ==="
# remove temporary enabled file (site still available in sites-available)
rm -f "${NGINX_HTTP_ENABLED}"
# restore other non-SSL enabled sites from backup if they exist (they were moved aside earlier)
if [ -d /etc/nginx/sites-enabled.bak ]; then
for f in /etc/nginx/sites-enabled.bak/*; do
[ -e "$f" ] || continue
mv -f "$f" /etc/nginx/sites-enabled/ || true
done
fi
nginx -t
systemctl reload nginx
# -------------------------
# Install certbot renewal hook to reload nginx and restart dnscrypt-proxy
# -------------------------
mkdir -p "${RENEW_HOOK_DIR}"
cat > "${RENEW_HOOK_DIR}/reload-dnscrypt-nginx.sh" <<'EOF'
#!/usr/bin/env bash
LOG="/var/log/letsencrypt-renewal-reload.log"
{
echo "[$(date)] deploy hook started: RENEWED_LINEAGE=${RENEWED_LINEAGE}"
systemctl reload nginx || systemctl restart nginx || true
systemctl restart dnscrypt-proxy || true
echo "[$(date)] deploy hook finished"
} >> "$LOG" 2>&1
EOF
chmod +x "${RENEW_HOOK_DIR}/reload-dnscrypt-nginx.sh"
# -------------------------
# Health monitor (self-heal + email) - installed
# -------------------------
# Write the header with an unquoted heredoc so DOMAIN/EMAIL expand at install time,
# then append the rest with a quoted heredoc so nothing else expands until runtime
cat > "${HEALTH_SCRIPT}" <<HSH
#!/usr/bin/env bash
set -euo pipefail
DOMAIN="${DOMAIN}"
EMAIL="${EMAIL}"
HSH
cat >> "${HEALTH_SCRIPT}" <<'HSH'
CERT="/etc/letsencrypt/live/${DOMAIN}/fullchain.pem"
LOG="/var/log/dnscrypt_health.log"
TMPDIR="/var/tmp/dnscrypt_health"
mkdir -p "${TMPDIR}"
touch "${LOG}"
STATE_CERT="${TMPDIR}/cert_alert_sent"
STATE_NGINX="${TMPDIR}/nginx_alert_sent"
STATE_DC="${TMPDIR}/dnscrypt_alert_sent"
send_mail() {
local subject="$1"
local body="$2"
echo -e "${body}" | mail -s "${subject}" "${EMAIL}"
echo "[$(date)] Sent alert: ${subject}" >> "${LOG}"
}
DRY_RUN=0
if [ "${1:-}" = "--dry-run" ]; then
DRY_RUN=1
fi
# Certificate expiry
if [ -f "${CERT}" ]; then
enddate=$(openssl x509 -in "${CERT}" -noout -enddate 2>/dev/null | cut -d= -f2 || echo "")
if [ -n "${enddate}" ]; then
endsec=$(date -d "${enddate}" +%s)
now=$(date +%s)
days_left=$(( (endsec - now) / 86400 ))
else
days_left=0
fi
else
days_left=0
fi
if [ "${days_left}" -lt 10 ]; then
SUBJECT="[ALERT] Certificate for ${DOMAIN} expires in ${days_left} days"
BODY="Certificate for ${DOMAIN} expires in ${days_left} days.\n\nCheck: sudo openssl x509 -in ${CERT} -noout -text\n\nThis is an automated alert."
if [ "${DRY_RUN}" -eq 1 ]; then
echo "DRY RUN: ${SUBJECT}"
echo -e "${BODY}"
else
today=$(date +%F)
if [ ! -f "${STATE_CERT}" ] || [ "$(cat "${STATE_CERT}")" != "${today}" ]; then
send_mail "${SUBJECT}" "${BODY}"
echo "${today}" > "${STATE_CERT}"
fi
fi
fi
attempt_restart_and_check() {
local svc="$1"
local statefile="$2"
echo "[$(date)] Attempting restart: ${svc}" >> "${LOG}"
systemctl restart "${svc}" || true
sleep 5
if systemctl is-active --quiet "${svc}"; then
echo "[$(date)] ${svc} active after restart" >> "${LOG}"
[ -f "${statefile}" ] && rm -f "${statefile}"
return 0
else
echo "[$(date)] ${svc} still down after restart" >> "${LOG}"
return 1
fi
}
# nginx
if ! systemctl is-active --quiet nginx; then
if [ "${DRY_RUN}" -eq 1 ]; then
echo "DRY RUN: nginx inactive"
else
if ! attempt_restart_and_check "nginx" "${STATE_NGINX}"; then
SUBJECT="[ALERT] nginx is not running on ${DOMAIN}"
BODY="nginx is not active on $(hostname) as of $(date). Restart attempts failed.\n\nJournalctl (last 50):\n$(journalctl -u nginx -n 50 --no-pager)\n"
today=$(date +%F)
if [ ! -f "${STATE_NGINX}" ] || [ "$(cat "${STATE_NGINX}")" != "${today}" ]; then
send_mail "${SUBJECT}" "${BODY}"
echo "${today}" > "${STATE_NGINX}"
fi
fi
fi
fi
# dnscrypt-proxy
if ! systemctl is-active --quiet dnscrypt-proxy; then
if [ "${DRY_RUN}" -eq 1 ]; then
echo "DRY RUN: dnscrypt-proxy inactive"
else
if ! attempt_restart_and_check "dnscrypt-proxy" "${STATE_DC}"; then
SUBJECT="[ALERT] dnscrypt-proxy is not running on $(hostname)"
BODY="dnscrypt-proxy is not active on $(hostname) as of $(date). Restart attempts failed.\n\nJournalctl (last 50):\n$(journalctl -u dnscrypt-proxy -n 50 --no-pager)\n"
today=$(date +%F)
if [ ! -f "${STATE_DC}" ] || [ "$(cat "${STATE_DC}")" != "${today}" ]; then
send_mail "${SUBJECT}" "${BODY}"
echo "${today}" > "${STATE_DC}"
fi
fi
fi
fi
exit 0
HSH
chmod +x "${HEALTH_SCRIPT}"
# Cron job every 6 hours
cat > /etc/cron.d/dnscrypt_health <<'CRON'
0 */6 * * * root /usr/local/bin/dnscrypt_health.sh >> /var/log/dnscrypt_health.log 2>&1
CRON
touch "${LOG_FILE}"
chown root:root "${LOG_FILE}"
chmod 644 "${LOG_FILE}"
# Dry-run renewal test
certbot renew --dry-run || echo "certbot dry-run failed - check /var/log/letsencrypt/letsencrypt.log"
# Create directory for extra dnscrypt files (blocked-names.txt, allowed-names.txt etc.) and populate it with basic files live from the dnscrypt GitHub - file paths are set in dnscrypt-proxy.toml
if [ ! -d "$DNSCRYPT_USER_FILES" ]; then
echo "Creating $DNSCRYPT_USER_FILES for block and allowed lists..."
mkdir -p "$DNSCRYPT_USER_FILES"
echo "Downloading basic block and allow lists for domains and ips + captive portal info to $DNSCRYPT_USER_FILES..."
curl -o "$DNSCRYPT_USER_FILES/blocked-names.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-blocked-names.txt
curl -o "$DNSCRYPT_USER_FILES/blocked-ips.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-blocked-ips.txt
curl -o "$DNSCRYPT_USER_FILES/allowed-names.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-allowed-names.txt
curl -o "$DNSCRYPT_USER_FILES/allowed-ips.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-allowed-ips.txt
curl -o "$DNSCRYPT_USER_FILES/captive-portals.txt" https://raw.githubusercontent.com/DNSCrypt/dnscrypt-proxy/refs/heads/master/dnscrypt-proxy/example-captive-portals.txt
fi
# Final client instructions
cat <<EOF
INSTALL COMPLETE
Domain: ${DOMAIN}
DoH endpoint: https://${DOMAIN}/dns-query (dnscrypt-proxy terminates TLS on 443)
Nginx: HTTP-only for ACME & static files (port 80)
Alerts to: ${EMAIL}
Browser instructions:
1) Firefox Desktop:
Preferences → Settings → Network Settings → Enable DNS over HTTPS → Custom: https://${DOMAIN}/dns-query
2) Firefox Android:
Settings → General → Network Settings → Use custom DoH: https://${DOMAIN}/dns-query
3) Chrome Desktop:
Settings → Privacy and security → Security → Use secure DNS → Custom: https://${DOMAIN}/dns-query
4) Chrome Android:
Settings → Privacy and security → Use secure DNS → Custom provider: https://${DOMAIN}/dns-query
EOF
echo "=== DONE ==="
The following will ask for two numbers to multiply:
Number1 = int(input("Type in the first number: "))
Number2 = int(input("Type in the second number: "))
print("The answer is", Number1*Number2)
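One thing to watch out for: if the user types something that isn't a number, int() raises ValueError and the program crashes. A small helper can make the prompt retry instead; read_int is my own name for it, not part of the original snippet:

```python
def read_int(prompt):
    # Keep asking until the user enters a valid whole number
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print("Please enter a whole number.")

# usage (same flow as above):
# number1 = read_int("Type in the first number: ")
# number2 = read_int("Type in the second number: ")
# print("The answer is", number1 * number2)
```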
If you can answer yes to these 3 questions, start with ReactJS. Otherwise, go with HTML/CSS and vanilla JS.
Do you intend to code long term, and are you entirely sure this isn't just a hobby but a career path?
Are you more technical than you are creative?
Are you single with no chance of getting a girlfriend?
Then yes, React is for you, my friend.
You can combine two scopes in OR context without closures using orWhere like this:
Subscription::active()->orWhere->future()->get();
According to @shawn's answer, using the sicp package and #lang sicp instead of #lang racket solves my problem, though it's still not quite clear to me how exactly the package fixes it internally.
In order to prevent redundancy, you want to link tables to each other with relations.
CREATE TABLE Ship (
ShipID INT AUTO_INCREMENT PRIMARY KEY,
ShipName VARCHAR(255) NOT NULL UNIQUE
);
CREATE TABLE Company (
CompanyID INT AUTO_INCREMENT PRIMARY KEY,
CompanyName VARCHAR(255) NOT NULL UNIQUE
);
CREATE TABLE ShipOwnership (
OwnershipID INT AUTO_INCREMENT PRIMARY KEY,
ShipID INT NOT NULL,
CompanyID INT NOT NULL,
StartDate DATE NOT NULL,
EndDate DATE NULL, -- remains NULL until the ship has another owner or is discarded
FOREIGN KEY (ShipID) REFERENCES Ship(ShipID),
FOREIGN KEY (CompanyID) REFERENCES Company(CompanyID)
);
Who owned the ship when?
INSERT INTO Ship (ShipName)
VALUES ('RMS Titanic');
INSERT INTO Company (CompanyName)
VALUES ('White Star Line');
INSERT INTO ShipOwnership (ShipID, CompanyID, StartDate, EndDate)
SELECT
s.ShipID,
c.CompanyID,
'1909-03-31' AS StartDate,
'1912-04-15' AS EndDate
FROM Ship s
JOIN Company c
WHERE s.ShipName = 'RMS Titanic'
AND c.CompanyName = 'White Star Line';
SELECT c.CompanyName
FROM ShipOwnership o
JOIN Ship s ON o.ShipID = s.ShipID
JOIN Company c ON o.CompanyID = c.CompanyID
WHERE s.ShipName like '%Titanic%'
AND '1911-06-01' BETWEEN o.StartDate AND COALESCE(o.EndDate, '9999-12-31');
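As a quick sanity check of the temporal-ownership lookup above, the same pattern can be run against an in-memory SQLite database (schema trimmed to SQLite syntax, with COALESCE as the portable null fallback):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Ship (ShipID INTEGER PRIMARY KEY, ShipName TEXT NOT NULL UNIQUE);
CREATE TABLE Company (CompanyID INTEGER PRIMARY KEY, CompanyName TEXT NOT NULL UNIQUE);
CREATE TABLE ShipOwnership (
    OwnershipID INTEGER PRIMARY KEY,
    ShipID INTEGER NOT NULL REFERENCES Ship(ShipID),
    CompanyID INTEGER NOT NULL REFERENCES Company(CompanyID),
    StartDate TEXT NOT NULL,
    EndDate TEXT NULL);
INSERT INTO Ship (ShipName) VALUES ('RMS Titanic');
INSERT INTO Company (CompanyName) VALUES ('White Star Line');
INSERT INTO ShipOwnership (ShipID, CompanyID, StartDate, EndDate)
VALUES (1, 1, '1909-03-31', '1912-04-15');
""")

# Who owned the ship on 1911-06-01?  ISO date strings compare correctly as text.
row = con.execute("""
SELECT c.CompanyName
FROM ShipOwnership o
JOIN Ship s ON o.ShipID = s.ShipID
JOIN Company c ON o.CompanyID = c.CompanyID
WHERE s.ShipName LIKE '%Titanic%'
  AND '1911-06-01' BETWEEN o.StartDate AND COALESCE(o.EndDate, '9999-12-31')
""").fetchone()
print(row[0])  # → White Star Line
```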
This is what git stash create is for. It creates a stash commit and outputs the hash, but it does not add the commit either to the branch or the stash list. (It has no option to include untracked files. The stash commit will eventually be garbage-collected, assuming you don't create a reference to it later.)
For me it was Azure -> Static Web App -> the Workflow section: the file name it pointed to did not exist in my repository because I had renamed it.
CTRL+SHIFT+ENTER... it is infuriating, and I haven't found a workaround.
For a (possible) resolution see
https://github.com/dotnet/aspnetcore/issues/64501
Apparently it is "by design" and setting a specific option will make number handling strict(er). I do not quite agree with the design but those are guys with faaaaar more experience than me :-)
Thanks for the clear explanation. This helps a lot. One question: if I keep both projects inside one monorepo for development (Django in a backend folder and Vue in a frontend folder), is that still considered a reasonable approach as long as I deploy them separately? I just want to make sure I’m not creating issues later when I move to production.
You are right. Anyway, as I said, I need to fetch the data from https://www.bloomberg.com/asia (Bloomberg), so any suggestions?
# Source - https://stackoverflow.com/q/79827652
# Posted by Maj mac, modified by community. See post 'Timeline' for change history
# Retrieved 2025-11-23, License - CC BY-SA 4.0
style.configure('Dark.TButton', background=button_base, foreground=colors['text'], borderwidth=1,
bordercolor=edge, lightcolor=edge, darkcolor=edge, padding=(10, 4))
style.map('Dark.TButton',
background=[('pressed', pressed_bg or hover_bg or button_base),
('active', hover_bg or button_base),
('!disabled', 'red')],
foreground=[('active', colors['text']), ('pressed', colors['text'])],
bordercolor=[('pressed', edge), ('active', edge)],
lightcolor=[('pressed', edge), ('active', edge)],
darkcolor=[('pressed', edge), ('active', edge)])
style.layout('Dark.TButton', [
('Button.border', {'sticky': 'nswe', 'children': [
('Button.padding', {'sticky': 'nswe', 'children': [
('Button.label', {'sticky': 'nswe'})
]})
] })
])
FPDF is an alternative for that: https://www.fpdf.org/
Here is an example: https://www.fpdf.org/en/script/script40.php
ALTER TABLE your_table DROP PARTITION your_partition_name;
This drops the partition without locking the table.
Also, Vedal spent years developing his AI, so I wouldn't be surprised if his project files have reached a colossal size by now; the fact that he had to buy a new computer because his old one wasn't powerful enough to run it is proof.
Okay..... I won't delete this post, even though I'm s***** like 5 meters of dirt road....
look at this:
let operation = CKQueryOperation(query: query)
operation.desiredKeys = ["name"]
operation.resultsLimit = 50
I forgot to add "products" to the desiredKeys... For everyone: don't overthink it, and go through everything again.