Nice question — this kind of task comes up a lot when handling image uploads or API payloads.
I've used a similar approach before: save the Bitmap to a MemoryStream, then convert the bytes to Base64. Using `ImageFormat.Png` usually preserves quality better than JPEG for transparency.
Hope someone posts a clean snippet here!
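Since no one has posted one yet, here is a rough sketch of the idea in Java (the C# `Bitmap`/`MemoryStream` version is analogous; the class and helper names below are my own illustration, not a drop-in answer):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import javax.imageio.ImageIO;

public class ImageToBase64 {
    // Encode an in-memory image as a Base64 PNG string.
    // PNG is lossless and keeps the alpha channel, unlike JPEG.
    static String toBase64Png(BufferedImage image) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "png", out);
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }

    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        // Base64 of the PNG signature bytes always starts with "iVBOR".
        System.out.println(toBase64Png(img).startsWith("iVBOR"));
    }
}
```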
| header 1 | header 2 |
| --- | --- |
| cell 1 | cell 2 |
| cell 3 | cell 4 |
I made a program that checks how high a pyramid you can build from a given number of bricks. Hope it helps:
blocks = int(input("Enter the number of blocks: "))
height = 0
layer = 0
while blocks > layer:
    layer += 1
    blocks = blocks - layer
    height += 1
print(height)
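As a cross-check (not part of the original program), the same answer follows from the triangular-number closed form: the height is the largest h with h(h+1)/2 ≤ blocks.

```python
import math

def pyramid_height(blocks):
    # Largest h such that h*(h+1)/2 <= blocks, via the quadratic formula.
    return (math.isqrt(8 * blocks + 1) - 1) // 2

print(pyramid_height(6))  # 3 layers: 1 + 2 + 3 = 6 bricks
```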
You can set up a Local Group Policy that applies only to a specific user account (or to Administrators vs. non-Administrators), like so:
So there's no need for PowerShell here :)
To your main question:
"I try to utilize Azure Resource Graph to get all records from Public DNS zones... Does anybody have an idea which table to query to get the records?"
This is not possible through Resource Graph. Public DNS records aren't stored there.
I have a bash script that does this by looping through subscriptions:
az account list --query "[].id" -o tsv | while read sub; do
  az network dns zone list --subscription "$sub" --query "[].{rg:resourceGroup, zone:name}" -o tsv | \
  while read rg zone; do
    for type in A AAAA CNAME MX NS PTR SRV TXT; do
      case "$type" in
        CNAME)
          az network dns record-set CNAME list \
            --subscription "$sub" -g "$rg" -z "$zone" \
            --query "[].{sub: '$sub', rg: '$rg', zone: '$zone', type: 'CNAME', name: name, records: CNAMERecord.cname}" \
            -o tsv
          ;;
        A)
          az network dns record-set A list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .ARecords[]?.ipv4Address as $ip
              | [$sub, $rg, $zone, "A", .name, $ip] | @tsv
            '
          ;;
        TXT)
          az network dns record-set TXT list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .TXTRecords[]?.value[] as $txt
              | [$sub, $rg, $zone, "TXT", .name, $txt] | @tsv
            '
          ;;
        NS)
          az network dns record-set NS list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .NSRecords[]?.nsdname as $nsd
              | [$sub, $rg, $zone, "NS", .name, $nsd] | @tsv
            '
          ;;
        AAAA)
          az network dns record-set AAAA list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .AAAARecords[]?.ipv6Address as $ip6
              | [$sub, $rg, $zone, "AAAA", .name, $ip6] | @tsv
            '
          ;;
        MX)
          az network dns record-set MX list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .MXRecords[]? as $mx
              | [$sub, $rg, $zone, "MX", .name, "\($mx.preference) \($mx.exchange)"] | @tsv
            '
          ;;
        PTR)
          az network dns record-set PTR list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .PTRRecords[]?.ptrdname as $ptr
              | [$sub, $rg, $zone, "PTR", .name, $ptr] | @tsv
            '
          ;;
        SRV)
          az network dns record-set SRV list \
            --subscription "$sub" -g "$rg" -z "$zone" -o json | \
            jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
              .[] | .SRVRecords[]? as $srv
              | [$sub, $rg, $zone, "SRV", .name, "\($srv.priority) \($srv.weight) \($srv.port) \($srv.target)"] | @tsv
            '
          ;;
        *)
          echo "Skipping unknown record type: $type" >&2
          ;;
      esac
    done
  done
done
Your idea of a "magnetic scan" on an LLM is a powerful way to describe a field of study called interpretability. This field aims to open up the "black box" of LLMs and understand their internal workings. While we don't use MRI machines, researchers use various techniques to see which "areas" of the model are most active in response to different inputs.
If you use the free LAB Fit Curve Fitting Software (www.labfit.net) to fit a function to your dataset, the "Results" dialog box gives you: 1) the average values of the parameters, 2) the corresponding uncertainties, and 3) the covariance matrix. LAB Fit also has an error-propagation option (first-order approximation). Using this option, you supply: 1) the expression for the propagated error, 2) the average values, 3) the uncertainties, and 4) the covariance matrix. See a complete example at https://www.labfit.net/fitting.htm. If you install LAB Fit, you can see several videos by clicking "Help" and choosing "Show Features (ppsx)". A general overview is available at https://www.youtube.com/@WiltonPereira-d9z
Wilton
Fun fact: the month field is zero-based in the Date object. So to get the first of January, you need to do this:
var date = new Date(2000, 0, 1)
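Reading the components back shows the asymmetry (months are zero-based, days of the month are one-based):

```javascript
const date = new Date(2000, 0, 1); // January 1, 2000
console.log(date.getFullYear()); // 2000
console.log(date.getMonth());    // 0  (January)
console.log(date.getDate());     // 1  (first of the month)
```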
The page contains tabularized parameters comparing the in-memory behaviour of XSSFWorkbook and SXSSFWorkbook, which should help you decide.
This issue only occurs with a specific few releases in version 17.10; it ended up being a known bug that was fixed in later versions and is no longer an issue in recent builds. The solution, if you encounter it, is to upgrade VS.
Instead of adding `input_shape` to your first layer, add `keras.Input(shape)` as the first layer:
classifer.add(keras.Input(shape=(11,)))
Then add your layers:
classifer.add(Dense(6, activation='relu'))
More about the sequential model here: https://keras.io/guides/sequential_model/
Just commenting to let you know I just ran into this issue and I am extremely annoyed about it. WTF is this, I just want an API key. To create a Power-Up I also need to host some HTML page somewhere for an iframe?! I need to host a webhook to get an API key?!
Iframe connector URL (Required for Power-Up)
who thought this was a good idea?!
Your `ProxyPass /static/ !` must come before the `ProxyPass /` rule so Apache serves static files itself instead of forwarding them to Gunicorn. Also, make sure your `Alias /static/` points to the correct static files directory and that Apache has permission to read it. The MIME error happens because Gunicorn returns an HTML 404 page instead of the CSS file when static requests get proxied to it.
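A minimal sketch of that ordering (the paths and the Gunicorn address below are assumptions; adjust them to your layout):

```apache
# Exclusion must come first: "!" tells Apache not to proxy /static/.
ProxyPass /static/ !
Alias /static/ /var/www/myapp/static/
<Directory /var/www/myapp/static/>
    Require all granted
</Directory>

# Everything else goes to Gunicorn.
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
```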
If the same error happens in every directory, that means yarn is picking up
"packageManager": "[email protected]"
from a package.json in your home directory.
`@RequestBody` with multipart (file + JSON) returns 403

I'm making a small Spring Boot test project where I want to update a user's profile data (name, email, and image). If I send the request in Postman using form-data with only a file, everything works fine. But as soon as I uncomment the `@RequestBody` line and try to send both JSON and the file in the same request, I get a 403 Forbidden error.
My controller method:
@PatchMapping("/{id}")
@Operation(summary = "Update user")
public ResponseEntity<User> updateUser(
@PathVariable Long id,
@RequestPart("file") MultipartFile file
// @RequestBody UserDtoRequest userDtoRequest
) {
System.out.println(id);
System.out.println(file);
// System.out.println(userDtoRequest);
return null;
}
My DTO:
@Data
@AllArgsConstructor
public class UserDtoRequest {
@Nullable
@Length(min = 3, max = 20)
private String username;
@Nullable
@Email(message = "Email is not valid")
private String email;
}
I can only accept data from `UserDtoRequest` if I use raw JSON in Postman, but then I cannot attach the image.

Question: How can I send both a file and a JSON object in the same request without getting a 403 error?

`@RequestBody` expects the entire request body to be JSON, which conflicts with the `multipart/form-data` encoding used for file uploads. The correct way is to use `@RequestPart` for both the JSON object and the file.
@PatchMapping(value = "/{id}", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public ResponseEntity<User> updateUser(
@PathVariable Long id,
@RequestPart(value = "file", required = false) MultipartFile file,
@RequestPart(value = "user", required = false) UserDtoRequest userDtoRequest
) {
System.out.println("ID: " + id);
System.out.println("File: " + file);
System.out.println("User DTO: " + userDtoRequest);
// TODO: Save file, update user, etc.
return ResponseEntity.ok().build();
}
Method: PATCH
URL: http://localhost:8080/users/{id}
Go to Body → form-data and add:
Key: `file` → Type: File → choose an image from your computer.
Key: `user` → Type: Text → paste the JSON string:
{"username":"John","email":"[email protected]"}
`@RequestPart` tells Spring to bind individual parts of a `multipart/form-data` request to method parameters. This allows binary data (the file) and structured data (the JSON) in the same request. Using `@RequestBody` with multipart is not supported because it expects a single non-multipart payload.
✅ Tip: If Spring can't parse the JSON in the `user` part automatically, add:
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
or ensure that the `user` field in Postman is exactly valid JSON.
2025: same problem here.
My findings: with pyarrow.set_memory_pool(pyarrow.jemalloc_memory_pool()) and pyarrow.jemalloc_set_decay_ms(0), the memory is eventually released, provided you have enough memory to avoid an OOM kill before the full GC triggers.
I'm facing exactly the same issue with Drools 8 and Drools 10.1.0. Everything works fine in IntelliJ, but deploying to Linux (RHEL 7.0) gives an NPE.
Caused by: java.lang.NullPointerException: Cannot invoke "org.kie.api.KieServices.newKieFileSystem()" because "this.ks" is null
at org.kie.internal.utils.KieHelper.<init>(KieHelper.java:52)
Added META-INF as below:
kie.conf content:
# KIE configuration file for Drools
# Example: Specify the KieServices implementation
org.kie.api.KieServices = org.drools.compiler.kie.builder.impl.KieServicesImpl
org.kie.internal.builder.KnowledgeBuilderFactoryService = org.drools.compiler.builder.impl.KnowledgeBuilderFactoryServiceImpl
Simple Code:
public static StatelessKieSession buildStatelessKieSession(List<String> drlFiles) {
KieHelper kieHelper = new KieHelper();
for(String drlFile : drlFiles){
kieHelper.addContent(drlFile, ResourceType.DRL);
}
return kieHelper.build().newStatelessKieSession();
}
Go to node_modules/react-native-gesture-handler/android/src/main/java/com/swmansion/gesturehandler/core/GestureHandlerOrchestrator.kt and, at line 193, change `awaitingHandlers.reversed()` to `awaitingHandlers.asReversed()`.
https://github.com/software-mansion/react-native-gesture-handler/issues/3621
For some websites I've tried, eyllanesc's solution did not work. You can try adding these parameters at the very start of your PyQt app:
import os

os.environ['QTWEBENGINE_CHROMIUM_FLAGS'] = '--ignore-ssl-errors --ignore-certificate-errors --allow-running-insecure-content --disable-web-security --no-sandbox'
os.environ['QTWEBENGINE_DISABLE_SANDBOX'] = '1'
The points of a square can only be separated by one of two lengths: a side or a diagonal. Given any set of 4 points, we don't know which pairs are diagonally opposite and which are adjacent.
Without loss of generality you only need to check 4 distances:
║A,B║;
║A,C║;
║A,D║;
And one other: ║B,C║; ║B,D║; or ║C,D║.
As @Floris said, using squared distances is easiest.
Of the three distances, two will be equal (this is the side length, squared); the third must be a diagonal and thus 2 * side^2.
To pick the last pair, you take the point which is diagonally opposite A and one of the other two points (B or C). This distance must be equal to the side length.
This does not, as @Kos pointed out, handle the bow-tie shape. If that is a consideration, then the ordering of the vertices matters: treat the input as a list of ordered points, and set up the function arguments so that A and C are diagonally opposite, and so are B and D.
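For illustration, here is a sketch in Python that uses only squared distances; it checks all six pairwise distances at once, which also sidesteps the ordering problem (the helper name is mine, and exact arithmetic is assumed — for floats you would compare with a tolerance instead of `==`):

```python
from itertools import combinations

def is_square(points):
    """Return True if the 4 points form a square.

    A square's six pairwise squared distances are 4 equal sides plus
    2 equal diagonals, with diagonal^2 == 2 * side^2.
    """
    d2 = sorted((ax - bx) ** 2 + (ay - by) ** 2
                for (ax, ay), (bx, by) in combinations(points, 2))
    return d2[0] > 0 and d2[0] == d2[3] and d2[4] == d2[5] and d2[4] == 2 * d2[0]

print(is_square([(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(is_square([(0, 0), (2, 0), (2, 1), (0, 1)]))  # False (rectangle)
```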
You will need EXT:crawler 12.0.9 or later (unreleased as of the time of writing) due to this bug:
https://github.com/tomasnorre/crawler/issues/1140
The problem was that I had installed Visual Studio Code from the Flatpak store. Downloading the .deb file from the VS Code website and installing it, and manually downloading the zip file of the Flutter SDK, fixed the problem.
This is a bug in Flutter/Chrome. It seems to be caused by Google no longer automatically falling back to the cross-platform software renderer SwiftShader when needed. A hoped-for fix has not been able to mitigate this yet.
Possible workaround:
--disable-features=DisallowRasterInterfaceWithoutSkiaBackend
The parameter count differs between the two representations: 5 for the geometric form (2 center coordinates, 2 radii, 1 alignment angle) versus 6 coefficients for the quadratic form $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$. The discrepancy is resolved because the solution set (x, y) of the algebraic equation is invariant under division by any nonzero coefficient: $x^2+(B/A)xy+(C/A)y^2+(D/A)x+(E/A)y+F/A=0$. So the quadratic form effectively has only 5 independent parameters.
I faced this same issue because I had added incorrect credentials.
So check your database URL and environment variables.
No — you can’t directly create a foreign key constraint on an expression like DATE(created_date) in standard SQL (including MySQL, PostgreSQL, SQL Server, Oracle, etc.).
Foreign key constraints must be defined between actual columns, not computed expressions. Both columns must also match in data type and precision.
IIS will restart the site if it detects the web.config file has changed (by default).
So assuming you can programmatically read and then save that file (you don't even need to make any change), IIS will handle the rest.
This works not just in Blazor (.NET Core) but also in .NET Framework (MVC and Web Forms) and even classic ASP, if you really, really need it to.
It's probably not the "correct" way, but it is simple and works; given that this has worked for 20+ years, all the way back to classic ASP, it seems pretty robust.
I have put this in old Web Forms apps for years, used it many times (normally to clear caches), and never had any issue such as mangling the web.config.
Try adding this setting as well:
SOCIALACCOUNT_EMAIL_AUTHENTICATION_AUTO_CONNECT = True
It goes at the same level as SOCIALACCOUNT_PROVIDERS
If you want to install something before your main package that also happens to run silently, then I would suggest creating your own wrapper that does that - it's actually the only solution I can think of.
A possible solution is a wrapper MSI that launches the .NET runtime installer first, then your main package - this way, you ensure that .NET is always installed before the main package.
Make sure that the wrapper MSI is not registered with Windows Installer (i.e. it does not show up in Control Panel) so you don't end up having duplicate entries.
Another solution could be to use some tools that already do this out of the box.
That's a mypy bug. I have fixed that in #19422 a few days ago, your original snippet works on mypy master (will likely be released in 1.18.0).
Make sure you don't have the following key in your Info.plist
<key>NSUserTrackingUsageDescription</key>
<string>...</string>
Then run:
flutter clean
flutter pub get
rm -rf ios/Pods ios/Podfile.lock pubspec.lock
to delete the current Pods, and run the app again to regenerate them.
Wrap your `trailingIcon` widget with a `SizedBox`:
SizedBox(
height: 50,
width: 50,
child: Transform.translate(
offset: const Offset(13, -13),
child: const Icon(Icons.arrow_drop_down, size: 40),
),
),
SHOW TABLE schema_name.table_name
worked for me - kindly use it! No need for any AWS util packages if you just want the DDL.
WHERE id BETWEEN 3 AND 4
would also be a possibility.
className={`myClass ${index && "active"}`}
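One caveat with the `&&` form: if `index` is the number `0`, the falsy value itself is interpolated into the class string. A quick check with plain strings (no React needed; the helper names are mine):

```javascript
const cls = (index) => `myClass ${index && "active"}`;
console.log(cls(3)); // "myClass active"
console.log(cls(0)); // "myClass 0"  <- the 0 leaks into the class name

// A ternary avoids that:
const cls2 = (index) => `myClass ${index ? "active" : ""}`;
console.log(cls2(0)); // "myClass "
```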
Could ServerAliveInterval help me if my SSH connection to the server is also very unstable, and I just need to restart my server anyway to make it alive again, even when the SSH connection only stays up for milliseconds?
Do I need to set a maximum ServerAliveInterval and some big value for ServerAliveCountMax?
Thanks for the answer
You're observing different execution orders because of how JavaScript's event loop handles **macrotasks** and **microtasks**.
- `setTimeout(..., 0)` is a **macrotask** (added to the macrotask queue).
- `.then()` from a `fetch` is a **microtask** (added to the microtask queue).
- Microtasks are **always executed before** the next macrotask, after the current execution stack is empty.
---
When you use `setTimeout(..., 0)` and `fetch(...)`, here’s what happens:
1. Synchronous code runs first.
2. Microtasks (like `.then()` from `fetch`) are processed.
3. Then macrotasks (like `setTimeout`) are processed.
That’s why in most browsers:
console.log("A");     // sync
fetch(...).then(...)  // microtask
setTimeout(..., 0)    // macrotask
console.log("B");     // sync
---
Changing to `setTimeout(..., 1)` gives the event loop more time, so the fetch may resolve before the timeout happens — but it's **not guaranteed**. It's a race condition depending on network timing and browser internals.
---
Node uses a slightly different event loop model. The key difference is:
- `setTimeout` goes into the **Timers phase**
- `fetch` (not native to Node until v18) resolves its Promise through the **microtask queue**
So in Node, `setTimeout(..., 0)` may log before `.then()` due to **phase timing differences**.
---
- Microtasks (`Promise.then`, `fetch`) run **before** macrotasks (`setTimeout`).
- Timing differences in browsers vs. Node.js are due to **event loop phase priorities**.
- `setTimeout(..., 0)` does **not mean immediate execution**, just “as soon as possible after current tasks.”
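The ordering rules above can be demonstrated without any network at all, by using a plain resolved Promise in place of `fetch` (runs in Node or a browser console):

```javascript
const order = [];
order.push("sync:start");
setTimeout(() => order.push("macrotask"), 0);          // macrotask queue
Promise.resolve().then(() => order.push("microtask")); // microtask queue
order.push("sync:end");

// Give the event loop a moment to drain, then report.
setTimeout(() => console.log(order.join(" -> ")));
// sync:start -> sync:end -> microtask -> macrotask
```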
Use `setText`:
MimeMessage m;
m.setText(body, "UTF-8", "html");
If you look at the source code, you'll see that in the end you get:
m.setContent(body, "text/html; charset=UTF-8");
I had this issue because of timeouts not properly set ON BOTH my backend function and my frontend caller.
We can now create an EqualityComparer object with the Create helper method, like this:
var set = new HashSet<MyClass>(comparer: EqualityComparer<MyClass>.Create((a, b) => a.Id == b.Id));
Nowadays, in Reqnroll, you can use the External Data Plugin to accomplish just that! It supports various file formats, including JSON, and the file structure you presented should work just fine :)
Has anyone got a solution for this?
According to https://github.com/vercel/next.js/issues/57005#issuecomment-1779807828, the error has been fixed in Node.js 21.1.0. So, instead of downgrading, an upgrade may also be a solution.
Instead of loading the entire 10GB dataset into a single NumPy array and then passing it around, you can create a generator to process the data in a stream. A generator is a special type of Python function that returns an iterator, which yields items one by one instead of all at once. This allows you to process the data as it's read from the file, effectively keeping only one slice of it in memory at any given time.
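A minimal sketch of that pattern (the function name and chunk size here are mine, just for illustration):

```python
def iter_chunks(path, chunk_size=64 * 1024 * 1024):
    """Yield a file's contents chunk by chunk instead of all at once."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Usage: total = sum(len(c) for c in iter_chunks("big.bin"))
# Only one chunk_size-sized slice is resident in memory at a time.
```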
For venv you have to do it with deadsnakes.
The default Ubuntu repository doesn't carry all Python versions (including the latest ones), so add the deadsnakes repository and then install any version using apt:
sudo apt install python3.xx
Then, while setting up the virtual environment, use that Python:
python3.xx -m venv .venv
Thanks!
You can adjust the gradient with the code below
const LinearGradient(
colors: [Colors.blue, Colors.black, Colors.black],
stops: [0.0, 0.5, 1.0], // Adjust stops to control the gradient spread
begin: Alignment.centerLeft, // Start from left
end: Alignment.centerRight, // End at right
),
What you can do, as suggested by @usr1234567, is to compile a short piece of code that includes `_Float16`.
Here's a minimal working example:
cmake_minimum_required(VERSION 3.12)
project(MyProject C)
include(CheckSourceCompiles)
check_source_compiles(C "
#define __STDC_WANT_IEC_60559_TYPES_EXT__
#include <float.h>
int main() {
_Float16 x = 1.0f16;
return 0;
}
" HAVE_FLOAT16)
if(HAVE_FLOAT16)
message(STATUS "_Float16 is supported by the compiler.")
else()
message(WARNING "_Float16 is not supported by the compiler.")
endif()
I found `__STDC_WANT_IEC_60559_TYPES_EXT__` here: https://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html
Using VideoCapture, the USB camera takes one minute to start, and the same happens with OpenCVFrameGrabber. Does anybody know how to make it start immediately?
I had the same issue with httpClient, however, changing http version to HTTP version 1.1 helped.
.version(HttpClient.Version.HTTP_1_1)
For anyone still getting this error:
GitHub Desktop lets you choose which shell to use for git operations.
Try setting a different shell via:
File -> Options -> Integrations -> Shell
Thanks to d-kleine on GitHub for the suggestion.
I was looking for that, but for Photoshop. Is it possible? I need to make text shaped with those corners.
The current OpenAI TypeScript/JavaScript SDK (`openai` @ 5.12.x, released 8 Aug 2025) still vendors a copy of `zod-to-json-schema` that depends on Zod 3-only internals. Trying to pair it with Zod 4 (or with the 3.25.68+ branch that began preparing for v4) leads to compiler/runtime failures such as the missing `ZodFirstPartyTypeKind` export.
Can you load at least one binary? Then I would try not to store them all in one dictionary; they can probably be loaded from files named after the dictionary keys "elsewhere in the code".
Can't even load one binary? Then chunking it is (I don't understand why that would be a problem with ordering).
Or buy more RAM, or use something that is not Python, as Python adds memory overhead.
You can install jaxlib with CUDA support directly on Windows.
https://github.com/pymc-devs/pymc/issues/7036
Something that worked for me, after hours of looking for an answer, was to delete the local Jupyter settings folder:
.jupyter/*
This helped - https://github.com/jupyter/notebook/issues/2359#issuecomment-648380681
When using the Material 3 TextField, you can leverage `inputTransformation`:
import androidx.compose.foundation.text.input.InputTransformation
import androidx.compose.foundation.text.input.maxLength
import androidx.compose.foundation.text.input.rememberTextFieldState
import androidx.compose.material3.TextField

val maxLength = 10

TextField(
    state = rememberTextFieldState(),
    inputTransformation = InputTransformation.maxLength(maxLength),
)
libunwind relies on DWARF `.eh_frame` sections to unwind the stack properly. To ensure unwind info is generated, compile with `-funwind-tables -fno-omit-frame-pointer`. Also ensure `.eh_frame` is linked.
To build on @DibsyJr's answer, I've created an attribute that I attach on the controller/method that I want to block, and check it in the OnCheckSlidingExpiration event handler. I find that more flexible than just checking the path property.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DoNotRenewAuthCookieAttribute : Attribute;
options.Events.OnCheckSlidingExpiration = context =>
{
if (context.HttpContext.GetEndpoint()?.Metadata.Any(e => e is DoNotRenewAuthCookieAttribute) ?? false)
{
context.ShouldRenew = false;
}
return Task.CompletedTask;
};
GTK 3's Wayland backend doesn't expose the pointer-enter event serials needed for the pointer-warp protocol. The serial values are low-level Wayland details that GTK 3 abstracts away and doesn't provide in its API.
Your options are to register your own parallel Wayland listener to capture the serials yourself, or to move to GTK 4, which has native support.
In short, there's no straightforward way in GTK 3 to get the pointer-enter serial.
The expr function doesn't automatically translate the Python `in` operator to its SQL equivalent when working with array types. The standard Spark SQL function for checking whether an element exists in an array is `array_contains`. You should be able to fix this by using `array_contains` within your filter expression.
Pseudocode
from pyspark.sql import functions as F
df = df.withColumn(
'target_events',
F.expr('filter(events, x -> array_contains(target_ids, x.id))')
)
I don't know if this tutorial is useful, but I'll link it anyway ^^:
I'm going to go into some technical details of the explanation of
[why] lookbehind assertion[s] must be fixed length in [PCRE] regex[es],
using the specific case that brought me here to the question. (I came understanding the lack of support for variable lookbehind assertions, just looking for some details of how to write fixed-length assertions.) Note that I'm basically following the answer of @Alan-Moore but giving a specific example. Other answers and comments were all very helpful.
I'll make a toy version of my problem and tweak it a bit to match it to the problem with variable length lookbehind assertions.
I have a group of classifications that are used to mark linguistic and other characteristics of speeches made by politicians, celebrities, scientific researchers, etc. These need to be inserted into the text at the point where they occur. e.g. If the speaker gave a date with the phrase,
This hasn't been done since 1762 and shan't be done ever again if I can help it!
and dates were to be marked with `bcde_` (we'll talk about the markers below), the annotated version would be changed to
This hasn't been done since 1762 bcde_ and shan't be done ever again if I can help it!
To make these different from the actual words in the speeches, these are groups of four letters, either repeated or in-a-row, followed by an underscore. So a few examples would be `aaaa_`, `abcd_`, `bbbb_`, `bcde_`, ..., `hhhh_`, ..., `mnop_`, ..., `wxyz_`, ..., `zzzz_`.
Note that some combinations without the trailing underscore, such as `eeee`, `mmmm`, `oooo`, `zzzz`, are found in some linguistic corpora. That's one reason for the trailing underscore. FYI (and maybe TMI), we don't actually use all the combinations, but I'll pretend that we do for the example. However, the regex engine would need to check these even if it had an intelligent implementation. Note also that these are inserted using a nicely UX-designed GUI, and the linking between classification string and meaning is done by some nice software which includes PCRE-compliant regex matching.
I won't go into details why, but whenever there is a `zzzz_` or `yyyy_` or `xxxx_` anywhere in the speech (and there can be only one of those three per speech), it works out that one and only one of the four-contiguous-letters-plus-underscore classifications needs to be part of the annotation of the same speech. If one of `zzzz_` or `yyyy_` or `xxxx_` is there, there can't be zero members of the set {`abcd_`, `bcde_`, `cdef_`, `defg_`, ..., `vwxy_`, `wxyz_`} in the annotated speech, and there can't be 2 or 3 or 4 or more members of the set. One-and-only-one if there is a `/([x-z])\1\1\1_/`. There can be any number of `/([a-w])\1\1\1_/`, i.e. `aaaa_`, `bbbb_`, `cccc_`, ..., if we have one of `/([x-z])\1\1\1_/`.
So, to check if this rule is being followed, I write a couple of `grep` commands, the second of whose regex has a negative lookahead and a negative lookbehind. I do it naïvely, first, without concern for the `grep: length of lookbehind assertion is not limited` error. Note that I won't list all of the (26 - 3 =) twenty-three possible in-a-row classifications; I'll use `(abcd_|bcde_| ... |wxyz_)`. This will compile and run, but it's not the code that will give the desired results. The real code uses all 23 possibilities. And no, I don't type all 23 possibilities each time.
$ grep -P " ([x-z])\1\1\1_" file_with_all_annotated_speeches_one_per_line \
> to_check_speeches_test_05
$ # Down here, on the 2nd line vv is the problem
$ grep -P \
'(?<! (abcd_|bcde_| ... |wxyz_).*)'\
'( (abcd_|bcde_| ... |wxyz_))'\
'(?!.* (abcd_|bcde_| ... |wxyz_))' \
to_check_speeches_test_05 \
> all_acceptable_speeches_as_per_05
$ # `comm -23' will give us those lines (speeches) that are only in the
$ #+ first argument (file), but not those in the second one nor those
$ #+ which are in both files
$ comm -23 <(sort to_check_speeches_test_05) \
<(sort all_acceptable_speeches_as_per_05) \
> not_acceptable_speeches_as_per_05
Okay, let's look at some speeches. Let's say that the Gettysburg Address is now a wrong version, so we need to change it to the right version. I'll just give relevant parts (any parts that have a classifier string).
Wrong Version:
Fourscore and rstu_ seven years jjjj_ ago
...
The world will little note, nor long wwww_ remember zzzz_ what
...
by the people, for the people, shall not perish mnop_ from
the earth. _-lincoln-getty-1863-_
Note that, in the file, this would all be on one line. If not, the `grep` would become a lot harder.
We can't have both `rstu_` and `mnop_` in the same line (speech) as `zzzz_`.
Right version:
Fourscore and seven years jjjj_ ago
...
The world will little note, nor long wwww_ remember zzzz_ what
...
by the people, for the people, shall not perish mnop_ from
the earth. _-lincoln-getty-1863-_
I'm not sure how the order of processing the negative lookbehind and the negative lookahead works (perhaps, once the negative lookahead fails, i.e. finds the matching string, the regex exits; perhaps the negative lookbehind starts first; idk), but I'm going to pretend that we'll get a negative lookahead from the `rstu_` and a negative lookbehind from the `mnop_`. (If the speech has `zzzz_` and three of `/ (abcd_|bcde_| ... |wxyz_)/`, I'm pretty sure both the lookahead and lookbehind would be run.)
The wrong version has a negative lookahead. (I think that's the case; if not, let's pretend.) That means the lookahead regex starts at `rstu_` and runs one regex pass on up to 1491 characters. It finds `mnop_` after going through 1452 characters. That means it fails, and I don't know if checks would continue. Still, I'm going to make the next assumption. Anyone is welcome to comment about whether I'm "running the regex engine" right. I think I'll keep this version with assumptions anyway, but I'd like to know (and probably note) what a PCRE-compliant engine actually does.
Now let's assume that we also get a negative lookbehind from `mnop_`. (There might be a smarter algorithm, but let's assume that) the engine first moves one character back to the space (`' '`), then goes through a minimum of 5 characters (looking for 5 characters matching any of the letters-in-a-row-plus-underscore strings) or a maximum of 45 characters (to the end of the line) looking for any of `(abcd_|bcde_| ... |wxyz_)`. Then it goes back to the `h` in `perish` and looks through 5|46 characters for a letters-in-a-row string. Then `s` with 5|47, `i` with 5|48, `r` with 5|49, `e` with a minimum of 5 or maybe 6 (to the start of `mnop_`) or maybe up to 50 (to the end of the line), `p` with 5|7|51, ... all the way back through 1440 more characters to `rstu_`'s `_`, which runs through 5|1448|1491, `u`, which runs through 5|1449|1492, `t` with 5|1449|1493, `s` with 5|1450|1494, then finally `r`, where I think it would only go through 5 characters until the `rstu_` matched. Using the minimums, that's 1452 * 5 = 7260 steps, five times as many as the lookahead. (The exact multiple isn't a coincidence.) These thousands of steps are for the Gettysburg Address, which is famous for how short it is! (1487 characters, if you use the transcript of Cornell University's copy, linked above.)
I won't do more details (sleep time), but imagine finding a mistake where one `xxxx_` has three different `(abcd_|bcde_| ... |wxyz_)` instances in John F. Kennedy's We choose to go to the moon speech. Or imagine checking, whether there are errors or not, through the famous-for-its-length 1960 speech of Fidel Castro at the United Nations, with its ~200k characters. If there are errors that make you use negative lookbehinds, that could be a lot of computing cycles, especially since we're likely dealing with recursion in the regex engine's details.
So, the way I see it, there are two things I could do.
The first, painful way is to use the oft-repeated answer given here to do as @Dávid-Horváth suggested and
try to branch your lookbehind part to fixed length patterns if possible.
That would involve splitting up the ORs (|
) in the first (?<!(abcd_|bcde_| ... |wxyz_).*)
, a process that would begin something like
'/(?:(?<! abcd_.*)|(?<! bcde_.*)| ... |(?<! wxyz_.*))'\
'( (abcd_|bcde_| ... | wxyz))(?!.* (abcd_|bcde_| ... |wxyz_))'
Could that be finished by hand, without metaprogramming (archived Wikipedia page, as I see it)? I don't think so.
If the lengths work out for you
My workaround is to make a copy of the file-with-one-speech-per-line, then move all the classification strings to the beginning of each line. Because other combinations will doubtless need regexes, I've figured the longest possible string (I think), with just
  ( 26 - 3 ) in-a-row strings (abcd_ through wxyz_, since none can start with x, y, or z)
+ 26 repeated-letter strings (aaaa_ through zzzz_)
= 49 possible classification strings
(I didn't note that no classification string may be repeated, but that's the case.)
I think that's the max we'd need, with something like
/(?<! (aaaa_|abcd_|bbbb_|bcde_| ... |wwww_|wxyz_).{0,50})/
for the negative lookbehind part.
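As a sanity check on that count of 49, here's a short Python sketch; it assumes the two families are the in-a-row strings (abcd_ through wxyz_) and the repeated-letter strings (aaaa_ through zzzz_), as in the pattern above:

```python
import string

letters = string.ascii_lowercase

# In-a-row strings: abcd_ .. wxyz_ (starts a through w, i.e. 26 - 3 = 23 of them).
runs = [letters[i:i + 4] + "_" for i in range(len(letters) - 3)]

# Repeated-letter strings: aaaa_ .. zzzz_ (26 of them).
repeats = [c * 4 + "_" for c in letters]

# The two families never overlap, so the union really is 23 + 26 = 49.
classification_strings = sorted(set(runs + repeats))
print(len(classification_strings))  # 49
```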
I've experimented on my system1 and found that I can go up to .{0,251} before I get a complaint about grep: branch too long in variable-length lookbehind assertion. If I went really high (past the 50000 range, into .{0,70000}), I got a complaint about grep: number too big in {} quantifier.
I'm not really sure why the number 252 is considered too big for the lookbehind case. Maybe it has to do with PCRE2's documented cap of 255 characters on the maximum match length of a variable-length lookbehind branch: the quantifier's maximum plus the branch's fixed characters must fit under that limit.
I checked and found that my length would no longer be okay if I needed to count characters, figuring that a max would be
49 classification strings * 5 letters + 1 space + 49 underscores = 295
which is greater than 251
. However, I figure I could rewrite the pattern as
/(?<! (aaaa|abcd|bbbb|bcde| ... |wwww|wxyz)_.{0,251})/
giving me
49 * 4 + 1 + 1 = 198
, easily within the allowed length. I figured I might as well use the complete 251
, just in case.
Feel free to comment about my misunderstandings of PCRE regexes. I love to learn, and I'd love to make this answer as accurate as possible.
Notes:
[1]
My system:
$ uname -a
CYGWIN_NT-10.0-19045 MY-MACHINE 3.6.3-1.x86_64 2025-06-05 11:45 UTC x86_64 Cygwin
$ bash --version | head -n 1
GNU bash, version 5.2.21(1)-release (x86_64-pc-cygwin)
How do I resolve this bug?
I still get this bug.
How can I resolve it?
In VS Code:
Press Ctrl + Shift + X to open the Extensions panel.
Search for and install Laravel Intelephense.
If for any reason someone is forced to use PHP 5.2 with PHPMailer 5.2.28 through an Office 365 account, just set $crypto_method to STREAM_CRYPTO_METHOD_SSLv23_CLIENT.
Works with smtp.office365.com, port 587, SMTPSecure = 'tls'.
Still, you need a PHP server that supports TLS v1.2 and (probably) OpenSSL.
You can use this Docker image: postgis/postgis
It contains PostGIS.
Docker image link:
https://hub.docker.com/r/postgis/postgis/
I'm not sure why, but it seems that you have disabled (or removed?) your NuGet package sources. For the WinUI 3 Gallery, your package sources should look like this:
Also make sure you have an internet connection in case the required NuGet packages are not cached and VS needs to download them.
In PHP you can directly access the index like this:
$_FILES['expediente']['name'][1]
$_FILES['expediente']['name'][2]
$_FILES['expediente']['name'][4]
Your IDE thinks @json(...) is a JavaScript decorator (which only works in TS), but in Blade it's just a server-side shortcut that Laravel turns into real JSON before sending it to the browser.
Fixes
Ignore the red squiggles - Blade will compile it correctly.
Tell your editor "*.blade.php = Blade" (or install a Blade extension).
Or swap to PHP's json_encode if you prefer:
roles: {!! json_encode($roles) !!},
In my case, this happened because for some reason my ~/.zcompdump file became corrupt. So I had to delete it with...
rm -f ~/.zcompdump*
And then start a new terminal session
I suggest you try the firebase-js-sdk; it's very easy to integrate into any JS or TS based app:
https://docs.expo.dev/guides/using-firebase/#using-firebase-js-sdk
What put me on the right track was thinkOfaNumber's answer to this question.
What I was having trouble with was that I should have used
"$HOME\.android\debug.keystore"
NOT %HOMEPATH%\.android\debug.keystore
NOT $env:HOMEPATH\.android\debug.keystore
on PowerShell on Windows. The one with %HOMEPATH%
for some reason still outputted a SHA1 without warning me that the file was not found.
$ErrorActionPreference = 'Stop'
[string]$JdkRoot = Resolve-Path "C:\Program Files\Android\Android Studio\jbr"
[string]$JdkBin = Resolve-Path "$JdkRoot\bin"
[string]$DebugKeystoreFilePath = Resolve-Path "$HOME\.android\debug.keystore"
$ErrorActionPreference = 'Continue'
& "$JdkBin/keytool" -exportcert -alias androiddebugkey -keystore $DebugKeystoreFilePath -storepass android | openssl sha1 -binary | openssl base64 | Set-Clipboard; Write-Host "Copied to clipboard."
See also How to see Gradle signing report
Thank you @Jimi for mentioning the root of the problem. As you said, the Handle of the control was not created. The DGV has a LoadingScreen shown when the user assigns a value as its DataSource, but this screen is a form and must cover the entire area of the DGV. Meanwhile, this screen is visible in front of other controls, and since the actual size and position of the hidden DGV are not accessible, the LoadingScreen ends up displayed at the wrong size and position.
Solution
In the code lines where the LoadingScreen must be shown, an IsVisible method can report the actual situation to decide whether the LoadingScreen can be shown. As you can see in the following code, two factors are checked for this purpose: 1) IsHandleCreated (as you mentioned), and 2) whether the DGV is visible on screen.
public static Form Create(
    Control control,
    bool coverParentContainer = true,
    bool coverParentForm = false,
    string title = "Loading...",
    double opacity = 0.5)
{
    var frm = new CesLoadScreen();
    frm._title = title;
    frm.Opacity = opacity;

    if (!IsVisible(control))
        return frm;

    SetLoadingScreenSize(frm, coverParentContainer, coverParentForm, control);

    control.Resize += (s, e)
        => SetLoadingScreenSize(frm, coverParentContainer, coverParentForm, control);

    frm.Show(control.FindForm());
    Application.DoEvents();
    return frm;
}

public static bool IsVisible(Control control)
{
    Rectangle screenBounds = Screen.FromControl(control).Bounds;
    Rectangle controlBounds = control.RectangleToScreen(control.ClientRectangle);
    bool isOnScreen = screenBounds.IntersectsWith(controlBounds);

    if (!control.IsHandleCreated || !isOnScreen)
        return false;

    if (!control.Visible)
        return false;

    return true;
}
With IntelliJ you can use the Exclude classes and packages option in Run configuration -> Modify options.
I use the module package and it works fine. Follow the steps below.
For the HTML and core Embla helpers, you can get them from this URL: https://codesandbox.io/p/sandbox/ffj8m2?file=%2Fsrc%2Fjs%2Findex.ts
npm install embla-carousel --save
import { AfterViewInit, Component } from '@angular/core';
import EmblaCarousel, { EmblaOptionsType } from 'embla-carousel';
import Autoplay from 'embla-carousel-autoplay';
import ClassNames from 'embla-carousel-class-names';
import {
addDotBtnsAndClickHandlers,
addPrevNextBtnsClickHandlers,
setupTweenOpacity,
} from '../../../../core/embla';
@Component({
  // Hypothetical selector/template; adjust to match your own component files.
  selector: 'app-carousel',
  templateUrl: './carousel.component.html',
})
export class CarouselComponent implements AfterViewInit {
  emblaOptions: Partial<EmblaOptionsType> = {
    loop: true,
  };
  plugins = [Autoplay(), ClassNames()];

  ngAfterViewInit(): void {
    const emblaNode = <HTMLElement>document.querySelector('.embla');
    const viewportNode = <HTMLElement>emblaNode.querySelector('.embla__viewport');
    const prevBtn = <HTMLElement>emblaNode.querySelector('.embla__button--prev');
    const nextBtn = <HTMLElement>emblaNode.querySelector('.embla__button--next');
    const dotsNode = <HTMLElement>document.querySelector('.embla__dots');

    const emblaApi = EmblaCarousel(viewportNode, this.emblaOptions);
    const removeTweenOpacity = setupTweenOpacity(emblaApi);
    const removePrevNextBtnsClickHandlers = addPrevNextBtnsClickHandlers(
      emblaApi,
      prevBtn,
      nextBtn,
    );
    const removeDotBtnsAndClickHandlers = addDotBtnsAndClickHandlers(emblaApi, dotsNode);

    emblaApi
      ?.on('destroy', removeTweenOpacity)
      .on('destroy', removePrevNextBtnsClickHandlers)
      .on('destroy', removeDotBtnsAndClickHandlers);
  }
}
I am facing the same problem on my website. Somebody please help. https://4indegree.com
context is present in the request parameters, as you can see here in the v19 docs: https://v19.angular.dev/api/common/http/HttpResourceRequest
=cell("filename")
will return a circular reference. I upgraded to 365 and it started.
Try it in a brand-new file with a totally blank sheet. Nothing will show until the file is saved, after which you will get the answer, but if you have a lot of these throughout your spreadsheet, you will see the circular reference warning popping up more and more.
Yes, you have to buy a real ESP32/Arduino board to run your code. The cause of the error is that you did not connect any board. If you are trying to work with Arduino without a physical board, you can try a simulator like https://www.tinkercad.com/circuits
In my case this error occurred because I had tried to create the same index twice...
Another possibility would be (note that BETWEEN is inclusive of both endpoints):
select user_id, first_name, last_name
from table_name
where user_id between 1 and 3;
To filter OCI custom images, use --query to select images whose "operating-system" is "Custom":
oci compute image list --compartment-id ocid1.tenancy.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxx --all --auth instance_principal --query "data[?\"operating-system\" == 'Custom']" --output table
I wager this is about the Kotlin version and the order in which you add KSP to your project.
This is how I did it:
My current Kotlin version is 2.2.0 (the KSP plugin version below is keyed Kotlin-version-KSP-version).
Step 1: add this to your module build.gradle.kts as a plugin:
id("com.google.devtools.ksp") version "2.2.0-2.0.2"
IMPORTANT: then sync your project
Step 2: then add these to your implementation:
implementation("com.google.dagger:dagger-compiler:2.51.1")
ksp("com.google.dagger:dagger-compiler:2.51.1")
Update to the latest version, then sync your project.
Do not add all the code at once and then sync; your build will fail like mine did. I'm using Android Studio Narwhal 2025.1.2.
Cheers
This makes "is_locked" behave like a variable (an attribute), not a function call:
class Lock:
    def __init__(self):
        self.key_code = "1234"
        self._is_locked = True  # underscore to mark internal

    def lock(self):
        self._is_locked = True

    def unlock(self, entered_code):
        if entered_code == self.key_code:
            self._is_locked = False
        else:
            print("Incorrect key code. Lock remains locked.")

    @property
    def is_locked(self):
        return self._is_locked

def main():
    my_lock = Lock()
    print("The lock is currently locked.")
    while my_lock.is_locked:  # no () needed now
        entered_code = input("Enter the key code: ")
        my_lock.unlock(entered_code)
    print("The lock is now unlocked.")
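The key mechanism in isolation, in case it helps (the class and names here are purely illustrative, not from the question): @property lets a method be read with plain attribute syntax.

```python
class Demo:
    def __init__(self):
        self._x = 42

    @property
    def x(self):
        # Runs when you read d.x; no parentheses at the call site.
        return self._x

d = Demo()
print(d.x)  # 42
```

Reading `d.x` calls the getter behind the scenes, which is why the `while my_lock.is_locked:` loop above works without `()`.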
Answer from Raymond Chen:
How can I detect that Windows is running in S-Mode?
https://devblogs.microsoft.com/oldnewthing/20250807-00/?p=111444
I've developed a Terraform script intended to execute the three key steps:
resource "azuread_application" "test-app" {
display_name = "test-app"
identifier_uris = ["https://test.onmicrosoft.com"]
sign_in_audience = "AzureADandPersonalMicrosoftAccount"
api {
requested_access_token_version = 2
}
single_page_application {
redirect_uris = ["https://redirect-uri.com/"]
}
required_resource_access {
resource_app_id = data.azuread_application_published_app_ids.well_known.result.MicrosoftGraph
resource_access {
id = data.azuread_service_principal.msgraph.oauth2_permission_scope_ids["offline_access"]
type = "Scope"
}
resource_access {
id = data.azuread_service_principal.msgraph.oauth2_permission_scope_ids["openid"]
type = "Scope"
}
}
}
resource "azuread_service_principal" "test_app_service_principal" {
client_id = azuread_application.test-app.client_id
}
resource "azuread_service_principal_delegated_permission_grant" "test_app_scopes_permission_grant" {
service_principal_object_id = azuread_service_principal.test_app_service_principal.object_id
resource_service_principal_object_id = data.azuread_service_principal.msgraph.object_id
claim_values = ["offline_access", "openid"]
}
However, I'm still encountering the same error during execution.
When I create the app by sending Graph API requests via Postman, everything works as expected. The script runs within a pipeline that uses the same credentials to obtain the token for Postman requests.
Additionally, the Azure Active Directory provider is configured with credentials from Azure B2C and not Azure AD, so that aspect should be correctly set up.
provider "azuread" {
client_id = data.azurerm_key_vault_secret.ado_pipeline_sp_client_id.value
client_secret = data.azurerm_key_vault_secret.ado_pipeline_sp_client_secret.value
tenant_id = data.azurerm_key_vault_secret.b2c_tenant_id.value
}
Is this script missing something? Is there any difference between using the Graph API requests or terraform for creating app registrations?
No such file or directory: 'C'
In your error, Python thinks your file path is just "C".
This usually happens when the path you pass to open() is incomplete, incorrectly formatted, or broken into pieces before open() receives it.
The "..." you've put in your code is not valid: Windows paths can't have "..." as a directory placeholder.
You must use the full, exact path to the file.
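As an illustration (the path below is hypothetical, not the asker's), a raw string or forward slashes keep Windows backslashes from being swallowed by escape sequences, so open() receives the whole path:

```python
# Backslashes in plain strings can start escape sequences;
# raw strings keep them literal.
p1 = r"C:\Users\me\Documents\data.txt"  # hypothetical path, raw string
p2 = "C:/Users/me/Documents/data.txt"   # forward slashes also work on Windows

assert len(r"\n") == 2  # raw string: two characters, backslash + n
# open(p1, encoding="utf-8") would then get the full path, not a fragment.
```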
Yes, you can run process type -Background and process compatibility - cross-platform on Kubernetes!
When I moved to Node 20.19.4, the problem was solved.
I think this should be documented on the PrimeNG website.
Thank you,
Zvika
Use this: Process an Authorization Reversal
POST https://apitest.cybersource.com/pts/v2/payments/{id}/reversals
No need to use a regex for this at all! They are a solution of "last resort" - certainly they are very powerful, but they are also resource heavy! As a rule of thumb, you're better off using PostgreSQL's built-in functions in preference to regexes.
Sorry about the state of the results parts of my post (see fiddle) but this new SO interface is God awful!
So, I did the following (all of the code below is available on the fiddle here):
SELECT
label,
REVERSE(label),
SPLIT_PART(REVERSE(label), ' ', 1),
RIGHT(SPLIT_PART(REVERSE(label), ' ', 1), 1),
CASE
WHEN RIGHT(SPLIT_PART(REVERSE(label), ' ', 1), 1) IN ('R', 'W')
THEN 'KO'
ELSE 'OK'
END AS result,
should
FROM
t;
Result:
label | reverse | split_part | right | result | should
---|---|---|---|---|---
Thomas Hawk AQWS456 | 654SWQA kwaH samohT | 654SWQA | A | OK | OK
Cecile Star RQWS456 | 654SWQR ratS eliceC | 654SWQR | R | KO | KO
Mickey Mouse WQWS456 | 654SWQW esuoM yekciM | 654SWQW | W | KO | KO
Donald Duck SQWS456 | 654SWQS kcuD dlanoD | 654SWQS | S | OK | OK
It's more general than the other code, because it'll pick out the first character of the last string, no matter how many strings precede the final (target) string.
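For comparison only (a Python sketch, not part of the SQL answer): the REVERSE / SPLIT_PART / RIGHT combination boils down to "take the first character of the last space-separated token".

```python
def check(label: str) -> str:
    # Equivalent to RIGHT(SPLIT_PART(REVERSE(label), ' ', 1), 1):
    # reversing, taking the first token, then its last character
    # is just the first character of the last token.
    return "KO" if label.rsplit(" ", 1)[-1][0] in ("R", "W") else "OK"

print(check("Thomas Hawk AQWS456"))  # OK
print(check("Cecile Star RQWS456"))  # KO
```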
So, to check my assertion that the "ordinary" functions are better than regexes, I wrote the following function (from https://stackoverflow.com/a/14328164/470530; Erwin Brandstetter to the rescue yet again). I also found this thread helpful: https://stackoverflow.com/questions/24938311/create-a-function-declaring-a-predefined-text-array
CREATE OR REPLACE FUNCTION random_pick(arr TEXT[])
RETURNS TEXT
LANGUAGE sql VOLATILE PARALLEL SAFE AS
$func$
SELECT (arr)[trunc((random() * (CARDINALITY($1)) + 1))::int];
$func$;
It returns a random element from an array.
So, I have to construct a table to test against - you can see the code in the fiddle - the last few lines are:
SELECT
random_pick((SELECT w1 FROM words_1)) || ' ' ||
random_pick((SELECT w1 FROM words_1)) || ' ' ||
random_pick((SELECT w2 FROM words_2)) AS label
FROM
GENERATE_SERIES(1,25)
)
SELECT
*,
CASE
WHEN RIGHT(SPLIT_PART(REVERSE(t.label), ' ', 1), 1) IN ('R', 'W')
THEN 'KO'
ELSE 'OK'
END AS result
FROM
t;
Result (snipped for brevity):
label | result
---|---
Mouse Mouse WQWS456 | KO
Star Cecile WQWS456 | KO
Mickey Star SQWS456 | OK
Star Cecile RQWS456 | KO
Star Hawk AQWS456 | OK
I also tested to see that my records were inserted and that the data looks OK.
Now, down to the nitty-gritty! I'm using PostgreSQL 18 beta because I want to look at the new features in EXPLAIN ANALYZE VERBOSE
. Here's the output of that functionality for my query:
Seq Scan on public.lab  (cost=0.00..2369.64 rows=86632 width=64) (actual time=0.013..49.298 rows=100000.00 loops=1)
  Output: label, CASE WHEN ("right"(split_part(reverse(label), ' '::text, 1), 1) = ANY ('{R,W}'::text[])) THEN 'KO'::text ELSE 'OK'::text END
  Buffers: shared hit=637
Planning Time: 0.027 ms
Execution Time: 54.068 ms
Note:
Planning Time: 0.024 ms
Execution Time: 50.476 ms
The output for Guillaume's query is similar (trumpets sound) except for the last 2 lines:
Planning Time: 0.044 ms
Execution Time: 150.038 ms
Over a few runs, my query takes approx. 33-35% of the time that his does. So, my original assertion holds true: with regexes, caveat emptor!
You can install it easily like below (I've tried it in Fedora 42),
1. git clone https://github.com/sarim/ibus-avro.git
2. cd ibus-avro
3. sudo dnf install ibus-libs
4. sudo dnf group install development-tools
5. aclocal && autoconf && automake --add-missing
6. ./configure --prefix=/usr
7. sudo make install
Then just press the Super (Windows) key and search for Input Method Selector, then scroll down and click the ... button for other languages. Now you can search for Avro, or just select Bengali and then iavro.
When I moved to Node 22.17.1, the problem was solved. Angular was installed and a new project was created.
Thank you,
Zvika
I tried answering you on Mozilla's forum, but I'm a moderated new user there, so here it is again:
Currently not possible, it seems: https://docs.google.com/document/d/1i3IA3TG00rpQ7MKlpNFYUF6EfLcV01_Cv3IYG_DjF7M/edit?tab=t.0
The problem is that it runs in a whole other process: you can copy buffer content, of course, but you cannot access shared buffers from the main thread.
So even if you force-enable usage, it's unlikely you get the same data in the buffer.
1. You need the configuration of the issuer (processors).
2. You need the RSA certificate from the merchant (private key).
3. Finally, you need to define the currency and brands (not all of them).
Is the issue just execution_timeout set to a low value? For me it was giving the zombie error, but setting a higher value for execution_timeout solved it.
from datetime import timedelta
create_plots_task = PythonOperator(
    task_id='create_plots',
    python_callable=create_plots,
    provide_context=True,
    execution_timeout=timedelta(minutes=120),
    dag=dag,
)
Now, can this be used to get into someone's cell phone? I have a hacker who has taken over 4 emails and a bunch of my influencer accounts. I believe she connects through my WiFi. Is any of this possible? I get reports daily in my file system and don't know how to stop this. Someone please help.
Thank you
Nancyb