Using Portainer. First, install the Portainer tool on your Linux Docker host:
docker volume create portainer_data
docker run -d -p 9000:9000 \
--name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
Then you can access the server's web page at localhost:9000. Go under Images:

Then, as you can see, you can explore its contents visually.
Yes, you are right: it sounds crazy, but IT RUNS!
Update, November 2025:
It seems it's now implemented in a much simpler way:
fname = "fasttext.model"
model.save(fname)
model = FastText.load(fname)
Docs page: https://radimrehurek.com/gensim/models/fasttext.html
This question can be answered from two standpoints, in my opinion. The first is an operating-system question, which is what other users half-heartedly pushed it toward by removing the C++ tag. The second is minizip-specific.
I'll start by answering the first, then proceed with the second.
It completely depends on which system you use and what it understands as a file.
Generally speaking, yes a FILE* can be a memory-mapped file.
Therefore you should be able to compress/decompress it.
You will have to look into your operating system's manuals to find out how to create a memory-mapped file.
For UNIX-like systems, this is usually fmemopen(3). But other solutions like mapping /dev/zero could work, although I haven't personally tried that. [1][2]
For Windows, an approach exists: create a file descriptor with O_SHORTLIVED and O_TEMPORARY, then use fdopen on it to obtain a FILE*. [3]
In the minizip-specific case, the library accepts a struct "zlib_filefunc_def" in its unzOpen function, where you can specify your own implementations of the fopen, fread, fwrite, etc. file functions.
Since the OP's interest appears to be in C++, a class could be created to wrap these functions. Let's call it "MemoryFile". "fopen" would then create a new instance of this class, while "fclose" would call its destructor.
To implement this class, you could either do the simple bookkeeping yourself or, probably better, use an existing class like std::ispanstream. [4]
[1] What is the purpose of MAP_ANONYMOUS flag in mmap system call?
[2] Is it possible to create a C FILE object to read/write in memory
[3] https://github.com/Arryboom/fmemopen_windows/blob/master/libfmemopen.c
@Lundin Love the way you refactored the code, but I wonder if that's always practical or even advisable. Can you say more about why you believe "creating function-like macros to replace chunks of code or program flow control is almost always bad practice"?
@Diego I had checked some possibilities by piping various subcommands of xsv, xan, csvkit, etc. I then got a long awk snippet which was technically one line, and it did work. However, it took 10-15 seconds on a large dataset.
On the one-liner need: this is not for one-time use or a script. I want to set an alias for the solution and use it like less -S or cat on many files on a daily basis. I also really wanted to reach a more efficient solution.
Many thanks to @Barmar, @Shawn, @jhnc and @dawg for the suggested solutions.
The perl-based solution from @jhnc does meet the requirements I wanted help with. It is way quicker than the awk snippet I had (and shorter, too).
@RARE, I can agree that the in/out example does look like homework. Getting it done without resorting to an R/Python/shell script was the difficult part.
Enable text wrapping on a column/row/cell:
$sheet->getStyle('K')->getAlignment()->setWrapText(true);
Add the <p> tag to the template markup,
so that the cell ends up with something like this:
<td>
@foreach($someArray as $itemVal)
<p> {{ $itemVal }} </p>
@endforeach
</td>
It depends on whether you want to rely on containerization techniques vs. VM scaling, and also on your ability to manage one or the other:
1. Elastic Beanstalk – provisions EC2 instances, auto-scales, and load-balances automatically, with minimal DevOps overhead. Each "application" in this scenario creates its own resources.
2. ECS / Fargate – requires you to set up services/tasks and to pick whether you want to manage EC2 or go serverless. This option requires some kind of load balancer (either your own or an ALB), and needs a bit more DevOps knowledge to containerize and create task definitions.
3. Scale manually – what you described.
Consider the trade-offs: do you want to optimize for cost, simplicity, ownership, cloud lock-in, etc.?
When you cannot find the Window menu in Eclipse:
Edit eclipse.ini and add -clean to the first line. Restart, and all menu entries will be recreated.
Don't forget to delete -clean after your restart.
I'm having the same issue; can anyone please check this? I'm stuck after trying multiple versions of Eclipse and JDKs.
@Gimby I think that would have worked too; I'm going to bookmark that page for future reference.
I think I would still have needed a PowerShell script to avoid manually updating all of the projects, but your idea would certainly be a more elegant way to do it.
Here is the thing:
when you have a new directory in your public/, you need to relink it,
so just run php artisan storage:unlink and then php artisan storage:link.
I read over that section, but obviously I did not give it the respect it deserved. Argh!
Is
<some-component color="red" />
not slower, because Angular has to access the DOM attribute?
This is exactly what Bart (maintainer of the GPIO subsystem) wants to push upstream for v6.19-rc1. If successfully done, you will be able to achieve that.
Disable NuGet auditing temporarily?
https://learn.microsoft.com/en-us/nuget/concepts/auditing-packages#configuring-nuget-audit
I'm just spitballing here; the warnings-as-errors deal seemed like a consequence, not a root cause. You want NuGet to temporarily shut up while you do your upgrades in peace.
The answer was quite simple in the end. I had Copilot write me a PowerShell script to update all the TreatWarningsAsErrors tags. I would have preferred a way to do this from within Visual Studio, but this worked.
Get-ChildItem -Path "path to solution" -Recurse -Filter *.csproj |
ForEach-Object {
(Get-Content $_.FullName) -replace '<TreatWarningsAsErrors>true</TreatWarningsAsErrors>', '<TreatWarningsAsErrors>false</TreatWarningsAsErrors>' |
Set-Content $_.FullName
}
The problem can be that it scans the wrong .xjb file, one designed for another schema.
The plugin scans all .xjb files by default. I had to add this for the plugin org.jvnet.jaxb:jaxb-maven-plugin:4.0.9:
<configuration>
...
<bindingIncludes>
<!-- DO NOT REMOVE: we skip all *.xjb -->
</bindingIncludes>
That's not an XCom limit. There's a configuration option limiting the maximum number of mapped tasks: https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#max-map-length. The default value is 1024; you can configure it higher or lower as you prefer.
XComs are limited by their size in your database of choice: https://www.astronomer.io/docs/learn/2.x/airflow-passing-data-between-tasks#when-to-use-xcoms.
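If you need more mapped tasks, the limit can be raised in airflow.cfg (or via the equivalent AIRFLOW__CORE__MAX_MAP_LENGTH environment variable); the value below is only an example:

```ini
[core]
# maximum number of dynamically mapped task instances allowed per task
max_map_length = 4096
```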
"@react-native/gradle-plugin": "^0.81.1",
Sometimes you have to match the above gradle-plugin version with the react-native version if you don't want to change the gradle-wrapper.properties file.
There are two possible reasons:
1. Check the target framework mentioned in project Properties -> Application -> Target Framework. Download it from the VS Installer -> Modify -> Individual Components.
2. Check the compilation target framework version in web.config. If it is different from the one in the properties, download it from the VS Installer -> Modify -> Individual Components.
If you are using the gh CLI (https://cli.github.com/):
gh repo sync your-user-name/your-repo
Suppose your username is baz and you forked foo/bar into baz/bar:
gh repo sync baz/bar
This issue usually happens when upgrading from WSO2 IS 7.1.0 → 7.2.0 without applying the full permission-migration steps.
WSO2 IS 7.2.0 introduces new internal role-management permissions, and existing users (including the admin user) won’t receive them automatically. As a result, SCIM operations like assigning roles return 403.
A fresh installation works because the new default roles are created with the correct permissions.
An upgraded setup needs the migration steps that update internal permissions and system roles.
These permission-migration steps are included in WSO2’s official upgrade process, but the automation/scripts required for this are only available through WSO2 subscription support. If you’re a subscriber, open a support ticket. Otherwise, you’ll need to contact WSO2 to obtain the migration utilities.
Official reference: WSO2 IS Upgrade Guide.
Update for Xcode 26.1.1 / Swift 6.2.1
Swift has now added the @frozen keyword to some definitions in the standard library. To get the same updated information, type "Swift" in Xcode's editor, right-click on it, and select "Jump to Definition" as before. Then select all and copy.
Open a Terminal and run the command
pbpaste | grep -E '(^@frozen public|^public)' | grep -E '(struct|protocol|class|enum)' > swiftTypes.txt
You get:
public protocol AdditiveArithmetic : Equatable {
@frozen public struct AnyBidirectionalCollection<Element> {
@frozen public struct AnyCollection<Element> {
@frozen public struct AnyHashable {
@frozen public struct AnyIndex {
@frozen public struct AnyIterator<Element> {
public class AnyKeyPath : _AppendKeyPath {
@frozen public struct AnyRandomAccessCollection<Element> {
@frozen public struct AnySequence<Element> {
@frozen public struct Array<Element> {
@frozen public struct ArraySlice<Element> {
@frozen public struct AutoreleasingUnsafeMutablePointer<Pointee> {
public protocol BidirectionalCollection<Element> : Collection where Self.Indices : BidirectionalCollection, Self.SubSequence : BidirectionalCollection {
public protocol BinaryFloatingPoint : ExpressibleByFloatLiteral, FloatingPoint {
public protocol BinaryInteger : CustomStringConvertible, Hashable, Numeric, Strideable where Self.Magnitude : BinaryInteger, Self.Magnitude == Self.Magnitude.Magnitude {
public protocol BitwiseCopyable : ~Escapable {
@frozen public struct Bool : Sendable {
@frozen public struct CVaListPointer {
public protocol CVarArg {
public protocol CaseIterable {
@frozen public struct Character : Sendable {
@frozen public struct ClosedRange<Bound> where Bound : Comparable {
public protocol CodingKey : CustomDebugStringConvertible, CustomStringConvertible, Sendable {
public protocol CodingKeyRepresentable {
public struct CodingUserInfoKey : RawRepresentable, Equatable, Hashable, Sendable {
public protocol Collection<Element> : Sequence {
public struct CollectionDifference<ChangeElement> {
@frozen public struct CollectionOfOne<Element> {
@frozen public enum CommandLine : ~BitwiseCopyable {
public protocol Comparable : Equatable {
@frozen public struct ContiguousArray<Element> {
public protocol Copyable {
public protocol CustomDebugStringConvertible {
public protocol CustomLeafReflectable : CustomReflectable {
public protocol CustomPlaygroundDisplayConvertible {
public protocol CustomReflectable {
public protocol CustomStringConvertible {
public protocol Decodable {
public protocol Decoder {
public enum DecodingError : Error {
@frozen public struct DefaultIndices<Elements> where Elements : Collection {
@frozen public struct DefaultStringInterpolation : StringInterpolationProtocol, Sendable {
@frozen public struct Dictionary<Key, Value> where Key : Hashable {
public struct DiscontiguousSlice<Base> where Base : Collection {
@frozen public struct Double {
@frozen public struct DropFirstSequence<Base> where Base : Sequence {
@frozen public struct DropWhileSequence<Base> where Base : Sequence {
@frozen public struct Duration : Sendable {
public protocol DurationProtocol : AdditiveArithmetic, Comparable, Sendable {
@frozen public struct EmptyCollection<Element> {
public protocol Encodable {
public protocol Encoder {
public enum EncodingError : Error {
@frozen public struct EnumeratedSequence<Base> where Base : Sequence {
public protocol Equatable {
public protocol Error : Sendable {
public protocol Escapable {
public protocol ExpressibleByArrayLiteral {
public protocol ExpressibleByBooleanLiteral {
public protocol ExpressibleByDictionaryLiteral {
public protocol ExpressibleByExtendedGraphemeClusterLiteral : ExpressibleByUnicodeScalarLiteral {
public protocol ExpressibleByFloatLiteral {
public protocol ExpressibleByIntegerLiteral {
public protocol ExpressibleByNilLiteral : ~Copyable, ~Escapable {
public protocol ExpressibleByStringInterpolation : ExpressibleByStringLiteral {
public protocol ExpressibleByStringLiteral : ExpressibleByExtendedGraphemeClusterLiteral {
public protocol ExpressibleByUnicodeScalarLiteral {
public protocol FixedWidthInteger : BinaryInteger, LosslessStringConvertible where Self.Magnitude : FixedWidthInteger, Self.Magnitude : UnsignedInteger, Self.Stride : FixedWidthInteger, Self.Stride : SignedInteger {
@frozen public struct FlattenSequence<Base> where Base : Sequence, Base.Element : Sequence {
@frozen public struct Float {
@frozen public struct Float16 {
public protocol FloatingPoint : Hashable, SignedNumeric, Strideable where Self == Self.Magnitude {
@frozen public enum FloatingPointClassification : Sendable {
public enum FloatingPointRoundingRule : Sendable {
@frozen public enum FloatingPointSign : Int, Sendable {
public protocol Hashable : Equatable {
@frozen public struct Hasher {
public protocol Identifiable<ID> {
@frozen public struct IndexingIterator<Elements> where Elements : Collection {
@frozen public struct InlineArray<let count : Int, Element> : ~Copyable where Element : ~Copyable {
public protocol InstantProtocol<Duration> : Comparable, Hashable, Sendable {
@frozen public struct Int : FixedWidthInteger, SignedInteger {
@frozen public struct Int128 : Sendable {
@frozen public struct Int16 : FixedWidthInteger, SignedInteger {
@frozen public struct Int32 : FixedWidthInteger, SignedInteger {
@frozen public struct Int64 : FixedWidthInteger, SignedInteger {
@frozen public struct Int8 : FixedWidthInteger, SignedInteger {
public protocol IteratorProtocol<Element> {
@frozen public struct IteratorSequence<Base> where Base : IteratorProtocol {
@frozen public struct JoinedSequence<Base> where Base : Sequence, Base.Element : Sequence {
public class KeyPath<Root, Value> : PartialKeyPath<Root> {
@frozen public struct KeyValuePairs<Key, Value> : ExpressibleByDictionaryLiteral {
public struct KeyedDecodingContainer<K> : KeyedDecodingContainerProtocol where K : CodingKey {
public protocol KeyedDecodingContainerProtocol {
public struct KeyedEncodingContainer<K> : KeyedEncodingContainerProtocol where K : CodingKey {
public protocol KeyedEncodingContainerProtocol {
public protocol LazyCollectionProtocol : Collection, LazySequenceProtocol where Self.Elements : Collection {
@frozen public struct LazyDropWhileSequence<Base> where Base : Sequence {
@frozen public struct LazyFilterSequence<Base> where Base : Sequence {
@frozen public struct LazyMapSequence<Base, Element> where Base : Sequence {
@frozen public struct LazyPrefixWhileSequence<Base> where Base : Sequence {
@frozen public struct LazySequence<Base> where Base : Sequence {
public protocol LazySequenceProtocol : Sequence {
public protocol LosslessStringConvertible : CustomStringConvertible {
@frozen public struct ManagedBufferPointer<Header, Element> : Copyable where Element : ~Copyable {
@frozen public enum MemoryLayout<T> : ~BitwiseCopyable, Copyable, Escapable where T : ~Copyable, T : ~Escapable {
public struct Mirror {
public protocol MirrorPath {
public protocol MutableCollection<Element> : Collection where Self.SubSequence : MutableCollection {
@frozen public struct MutableRawSpan : ~Copyable & ~Escapable {
@frozen public struct MutableSpan<Element> : ~Copyable, ~Escapable where Element : ~Copyable {
@frozen public enum Never {
public protocol Numeric : AdditiveArithmetic, ExpressibleByIntegerLiteral {
@frozen public struct ObjectIdentifier : Sendable {
@frozen public struct OpaquePointer {
public protocol OptionSet : RawRepresentable, SetAlgebra {
@frozen public enum Optional<Wrapped> : ~Copyable, ~Escapable where Wrapped : ~Copyable, Wrapped : ~Escapable {
@frozen public struct OutputRawSpan : ~Copyable, ~Escapable {
@frozen public struct OutputSpan<Element> : ~Copyable, ~Escapable where Element : ~Copyable {
public class PartialKeyPath<Root> : AnyKeyPath {
@frozen public struct PartialRangeFrom<Bound> where Bound : Comparable {
@frozen public struct PartialRangeThrough<Bound> where Bound : Comparable {
@frozen public struct PartialRangeUpTo<Bound> where Bound : Comparable {
@frozen public struct PrefixSequence<Base> where Base : Sequence {
public protocol RandomAccessCollection<Element> : BidirectionalCollection where Self.Indices : RandomAccessCollection, Self.SubSequence : RandomAccessCollection {
public protocol RandomNumberGenerator {
@frozen public struct Range<Bound> where Bound : Comparable {
public protocol RangeExpression<Bound> {
public protocol RangeReplaceableCollection<Element> : Collection where Self.SubSequence : RangeReplaceableCollection {
public struct RangeSet<Bound> where Bound : Comparable {
public protocol RawRepresentable<RawValue> {
@frozen public struct RawSpan : ~Escapable, Copyable, BitwiseCopyable {
public class ReferenceWritableKeyPath<Root, Value> : WritableKeyPath<Root, Value> {
@frozen public struct Repeated<Element> {
@frozen public enum Result<Success, Failure> where Failure : Error, Success : ~Copyable, Success : ~Escapable {
@frozen public struct ReversedCollection<Base> where Base : BidirectionalCollection {
public protocol SIMD<Scalar> : CustomStringConvertible, Decodable, Encodable, ExpressibleByArrayLiteral, Hashable, SIMDStorage {
@frozen public struct SIMD16<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMD2<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMD3<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMD32<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMD4<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMD64<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMD8<Scalar> : SIMD where Scalar : SIMDScalar {
@frozen public struct SIMDMask<Storage> : SIMD where Storage : SIMD, Storage.Scalar : FixedWidthInteger, Storage.Scalar : SignedInteger {
public protocol SIMDScalar : BitwiseCopyable {
public protocol SIMDStorage {
public protocol Sendable : SendableMetatype {
public protocol SendableMetatype : ~Copyable, ~Escapable {
public protocol Sequence<Element> {
@frozen public struct Set<Element> where Element : Hashable {
public protocol SetAlgebra<Element> : Equatable, ExpressibleByArrayLiteral {
public protocol SignedInteger : BinaryInteger, SignedNumeric {
public protocol SignedNumeric : Numeric {
public protocol SingleValueDecodingContainer {
public protocol SingleValueEncodingContainer {
@frozen public struct Slice<Base> where Base : Collection {
@frozen public struct Span<Element> : ~Escapable, Copyable, BitwiseCopyable where Element : ~Copyable {
@frozen public struct StaticBigInt : ExpressibleByIntegerLiteral, Sendable {
@frozen public struct StaticString : Sendable {
@frozen public struct StrideThrough<Element> where Element : Strideable {
@frozen public struct StrideThroughIterator<Element> where Element : Strideable {
@frozen public struct StrideTo<Element> where Element : Strideable {
@frozen public struct StrideToIterator<Element> where Element : Strideable {
public protocol Strideable<Stride> : Comparable {
@frozen public struct String {
public protocol StringInterpolationProtocol {
public protocol StringProtocol : BidirectionalCollection, Comparable, ExpressibleByStringInterpolation, Hashable, LosslessStringConvertible, TextOutputStream, TextOutputStreamable where Self.Element == Character, Self.Index == String.Index, Self.StringInterpolation == DefaultStringInterpolation, Self.SubSequence : StringProtocol {
@frozen public struct Substring : Sendable {
@frozen public struct SystemRandomNumberGenerator : RandomNumberGenerator, Sendable {
public protocol TextOutputStream {
public protocol TextOutputStreamable {
@frozen public struct UInt : FixedWidthInteger, UnsignedInteger {
@frozen public struct UInt128 : Sendable {
@frozen public struct UInt16 : FixedWidthInteger, UnsignedInteger {
@frozen public struct UInt32 : FixedWidthInteger, UnsignedInteger {
@frozen public struct UInt64 : FixedWidthInteger, UnsignedInteger {
@frozen public struct UInt8 : FixedWidthInteger, UnsignedInteger {
@frozen public struct UTF8Span : Copyable, ~Escapable, BitwiseCopyable {
@frozen public enum UnboundedRange_ {
@frozen public struct UnfoldSequence<Element, State> : Sequence, IteratorProtocol {
@frozen public enum Unicode : ~BitwiseCopyable {
public protocol UnicodeCodec : _UnicodeEncoding {
@frozen public enum UnicodeDecodingResult : Equatable, Sendable {
public protocol UnkeyedDecodingContainer {
public protocol UnkeyedEncodingContainer {
@frozen public struct Unmanaged<Instance> where Instance : AnyObject {
@frozen public struct UnsafeBufferPointer<Element> : Copyable where Element : ~Copyable {
@frozen public struct UnsafeMutableBufferPointer<Element> : Copyable where Element : ~Copyable {
@frozen public struct UnsafeMutablePointer<Pointee> : Copyable where Pointee : ~Copyable {
@frozen public struct UnsafeMutableRawBufferPointer {
@frozen public struct UnsafeMutableRawPointer {
@frozen public struct UnsafePointer<Pointee> : Copyable where Pointee : ~Copyable {
@frozen public struct UnsafeRawBufferPointer {
@frozen public struct UnsafeRawPointer {
public protocol UnsafeSendable : Sendable {
public protocol UnsignedInteger : BinaryInteger {
public class WritableKeyPath<Root, Value> : KeyPath<Root, Value> {
@frozen public struct Zip2Sequence<Sequence1, Sequence2> where Sequence1 : Sequence, Sequence2 : Sequence {
public protocol _AppendKeyPath {
You're basically fighting two separate things here:
How do I save multiple models at once? (backend)
How do I not end up with a monster React component? (frontend)
1) Backend – multiple models "in one go"
Inertia vs. Livewire doesn't matter here. Once the request hits Laravel it's just:
"I got some nested data, I need to create/update a few models safely."
The usual pattern:
Send one payload that contains everything (invoice + client + provider + items).
Validate it with nested rules (or a form request).
In the controller, wrap the whole thing in a DB::transaction(),
so either all models are saved, or nothing is saved if something blows up.
Create/update: client, provider, invoice, then invoice items.
So from Laravel's point of view it's still just one form submit, just writing to 3–4 tables instead of 1. Nothing Inertia-specific about it.
2) Frontend – avoid the god component
This is where Inertia + React shines if you structure it right.
Core idea:
Keep one single form state (invoice + client + provider + items) in your page or main form component.
Break the UI into small "dumb" sections:
<ClientSection />
<ProviderSection />
<InvoiceMetaSection />
<ItemsSection />
Those sections don't own state; they just receive value + onChange props.
The "page" (or main form) is the only one that knows the full shape of the data, and the only place that submits.
Practically: you still have one form, one submit, one Inertia post(), but visually and in code it feels like 3–4 smaller, focused components instead of one giant soup.
Mental model that keeps things simple:
Treat the invoice + client + provider as one logical resource in the UI
(one screen, one save button, one request).
Treat them as separate models only in the database layer.
Front: composition + one global state.
Back: validation + transaction.
It doesn't matter if you're saving 1 model or 5: same flow, just slightly more fields.
This turned out to be a bug in Drools and will be addressed in version 10.2.0.
https://kie.zulipchat.com/#narrow/channel/232677-drools/topic/Strange.20behaviour.20with.20traditional.20syntax.20in.20RuleUnit/with/558621566
I think the problem is more conceptual.
Consider this case from LeetCode (the naive solution fails on this one):
# Source: https://stackoverflow.com/q/72233959
# Diagram copied from the post above
2
/ \
NULL 3
/ \
NULL 4
/ \
NULL 5
/ \
NULL 6
We misunderstood the meaning of depth in this case. According to the problem statement on LeetCode:
"The minimum depth is the number of nodes along the shortest path from the root node down to the nearest leaf node."
Note: A leaf is a node with no children.
Now note the explicit meaning of a leaf node: it is a node with 0 children. Since our root node (see diagram) has one child (a right child with value 3), it is not a leaf node. That is where the confusion kicks in: in our naive code, we usually write the base case like this:
if (!node) return 0;
We treat this as reaching a leaf node, but it does not guarantee that the parent is a leaf; it just means that one child (left or right) is NULL, while the parent may still have the other child.
By doing this, we ignore the fact that a node is a leaf only if both its left and right children are NULL.
So, in order to correctly identify a node as a leaf, we must check both its left and right pointers; only then can we say it is a leaf node.
if (!root) return 0; // check whether the node even exists
// then check if it is a LEAF NODE
if (root->left == NULL && root->right == NULL) return 1; // leaf node: no children
// otherwise one child may be NULL; the node is not a leaf, so we do not stop the recursion (DFS)
int left_height = depth(root->left);
int right_height = depth(root->right);
// now we have the depth of both subtrees
if (root->left == NULL) return 1 + right_height;
if (root->right == NULL) return 1 + left_height;
// if both children of the node exist, take the min
return min(left_height, right_height) + 1;
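The same corrected logic can be sketched in Python and checked against the degenerate tree from the diagram (the Node class and function name here are just illustrative):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def min_depth(node):
    if node is None:
        return 0                      # empty subtree: does not count as a leaf
    if node.left is None and node.right is None:
        return 1                      # a real leaf: no children at all
    if node.left is None:             # only the right subtree exists
        return 1 + min_depth(node.right)
    if node.right is None:            # only the left subtree exists
        return 1 + min_depth(node.left)
    return 1 + min(min_depth(node.left), min_depth(node.right))

# The right-skewed tree from the diagram: 2 -> 3 -> 4 -> 5 -> 6
root = Node(2, None, Node(3, None, Node(4, None, Node(5, None, Node(6)))))
print(min_depth(root))  # -> 5 (a naive "return 0 at NULL, take min" version gives 1)
```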
It takes 24h+ before you'll see the 403s turn into 200s. This is because Google only adds your service account to the bucket read permissions at the next report generation moment.
Simply reinstalling git worked for me.
The error means the server is returning an invalid or malformed Access-Control-Allow-Headers value in the OPTIONS (preflight) response.
Even a small issue like a trailing comma, duplicate header, or empty value will cause the browser to reject it.
What to check:
Inspect the failing OPTIONS response in DevTools and compare with Dev.
Look for:
✔ trailing commas
✔ blank header values
✔ duplicated Access-Control-Allow-Headers
✔ different headers added by WSO2 / Okta / Apache
It’s almost always a formatting issue in the CORS headers on that environment.
Oops, apologies I did post this incorrectly. Thank you for your response. I will give it a try.
I found
SELECT * FROM vector_store WHERE metadata->>'service' = ?
to be working in Spring. Metadata, however, will never be an array, but may be nested.
Add the following snippet inside the cloudhub2Deployment element in pom.xml. This will enable Object Store V2 on CloudHub for the application.
<integrations>
<services>
<objectStoreV2>
<enabled>true</enabled>
</objectStoreV2>
</services>
</integrations>
I noticed that the ./.nuxt/tsconfig.json file does not include "../server/**/*":
{
// https://nuxt.com/docs/guide/concepts/typescript
"extends": "./.nuxt/tsconfig.json"
}
This problem is fixed in 4.2.1, or we can handle it ourselves by changing tsconfig.json. This is also the solution for version 4.2.1:
{
"references": [
{
"path": "./.nuxt/tsconfig.app.json"
},
{
"path": "./.nuxt/tsconfig.server.json"
},
{
"path": "./.nuxt/tsconfig.shared.json"
},
{
"path": "./.nuxt/tsconfig.node.json"
}
],
"files": []
}
I ended up dreaming big and going a step further. Instead of just pointing browsers at my remote dnscrypt-proxy DoH endpoint, I wanted system-level DNS redirection back, like I had with local dnscrypt-proxy instances on Android (mostly for ads) and Windows (ads and telemetry). So I navigated the world of creating a DNS stamp for my remote dnscrypt-proxy, which took a lot of fumbling, as each stamp I generated would error out, until finally I got it right.
The only thing left now is to install a smaller/simpler dnscrypt-proxy Magisk module on Android, and a simpler dnscrypt-proxy setup on Windows, both upstreaming to my remote instance. System-level blocking with centralised management of block lists is a wonderful thing.
I hope to wipe and recreate the server from scratch soon and provide an updated version of the script posted earlier.
Thanks for such a brief answer, appreciated!
FastAPI expects the body of a POST request to follow a defined structure.
If you send just a plain string like:
"hello world"
FastAPI cannot parse it unless you tell it exactly how to treat that string.
So it returns something like:
{"detail":"There was an error parsing the body"}
or:
{"detail":[{"type":"string_type","msg":"str type expected"}]}
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TextIn(BaseModel):
    text: str

@app.post("/predict")
def predict(data: TextIn):
    input_text = data.text
    # your ML model prediction here
    result = my_model.predict([input_text])
    return {"prediction": result}
Then send a JSON body like:
{
"text": "This is my input sentence"
}
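For completeness, here is a sketch of building the matching client-side request with only the Python standard library (the http://localhost:8000 base URL is an assumption for a local dev server):

```python
import json
import urllib.request

def build_predict_request(base_url: str, text: str) -> urllib.request.Request:
    """Build a POST request whose JSON body matches the TextIn model above."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/predict",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_predict_request("http://localhost:8000", "This is my input sentence")
print(req.data.decode())  # {"text": "This is my input sentence"}
# Send it with: urllib.request.urlopen(req)
```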
Great solution! 👏 This approach with GitHub Actions and Sideloadly is a clever workaround for developers without a Mac or a paid Apple Developer account. Using a free Apple ID certificate allows you to sideload apps to an iPhone without needing the full developer program.
For those who want more information on the details of using GitHub Actions with macOS runners, you can check out the GitHub Actions documentation for further configuration.
This is a great method to get your app running on iOS during development without the need for a Mac or a paid account!
You are missing one edge case (the statement is not true for n = 0, which you probably don't care about). This is my attempt at proving the theorem using Nat.mod.inductionOn.
In my case (Flutter 3.35 on macOS), it was not a Firebase outage; it was a combination of package versions and macOS network permissions/entitlements.
First, make sure you’re on a recent Flutter and Firebase version:
flutter upgrade
flutter pub upgrade
And in pubspec.yaml use the latest versions of:
firebase_core: ^latest
cloud_firestore: ^latest
Then:
flutter pub get
In macos/Runner/Info.plist, add:
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
Both of these files need network client permission:
macos/Runner/Release.entitlements
macos/Runner/DebugProfile.entitlements
Add:
<key>com.apple.security.network.client</key>
<true/>
Finally, clean and rebuild:
flutter clean
rm -rf macos/Pods macos/Podfile.lock
pod install
flutter run
Hope this helps someone facing the same issue. Thanks!
The key is storing the video frames from the past couple of seconds, e.g. in a ring buffer. Once you have detected a distinct playing card, apply block motion detection backwards. You should get a lot of redundant motion vectors (one is sufficient to tell the origin), so filter them and you will be able to retrieve the original direction of the card.
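The ring-buffer part can be sketched with a bounded deque (the frame rate and window length are assumptions; real frames would replace the integer stand-ins):

```python
from collections import deque

FPS = 30
ring = deque(maxlen=2 * FPS)   # keeps roughly the last 2 seconds; old frames drop off

for frame_id in range(100):    # stand-in for the capture loop
    ring.append(frame_id)      # in practice: the decoded frame image

# On detecting a card, walk newest -> oldest to run block motion detection backwards
recent_first = list(reversed(ring))
print(len(ring), recent_first[0], recent_first[-1])  # -> 60 99 40
```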
Thanks @MTO
The part of your answer below works for converting the date into the expected format without using an insert command.
I tried it this way:
~~~
create table basetable as
with test_table as (select * from basetable)
SELECT t.*, date_str,
TO_DATE(date_str DEFAULT NULL ON CONVERSION ERROR, 'DD/MM/RR') as formatted_date
from test_table t
~~~
This worked as expected.
import React, { useState, useEffect, useRef } from 'react';
import { Crosshair, Heart, Zap, Target } from 'lucide-react';
const FreeFireBattleGame = () => {
const canvasRef = useRef(null);
const [gameState, setGameState] = useState('menu'); // menu, playing, gameover
const [player, setPlayer] = useState({ x: 400, y: 300, health: 100, ammo: 30 });
const [enemies, setEnemies] = useState([]);
const [bullets, setBullets] = useState([]);
const [score, setScore] = useState(0);
const [keys, setKeys] = useState({});
In addition to the solution of @AHaworth and the explanation of the sizing behavior in the answer of @JohnBollinger, I've found in the meantime a different solution by using
grid-template-rows: repeat(2, minmax(min-content, 0px)) repeat(2, min-content)
instead of
grid-template-rows: repeat(4, min-content)
On MDN it says "If max < min, then max is ignored and minmax(min,max) is treated as min." Thus minmax(min-content, 0px) should be equal to min-content, but it seems that for the track-sizing algorithm it is now treated as fixed size instead of intrinsic size. In any case, it works, as one can see in the following snippet:
html, body {
  height: 100vh;
  max-height: 100vh;
  margin: 0px;
}

/* Grid-Container */
#container {
  display: grid;
  grid-template-areas:
    "v p"
    "v o"
    "v t"
    "m t";
  grid-template-columns: 1fr min-content;
  grid-template-rows: repeat(2, minmax(min-content, 0px)) repeat(2, min-content);
  gap: 4px;
  width: 100%;
}

/* Grid-Items */
div.grid-item {
  border-color: black;
  border-width: 2px;
  border-style: solid;
  position: relative;
}

#video {
  background-color: green;
  height: 180px;
  grid-area: v;
}

#metadata {
  background-color: yellow;
  height: 30px;
  grid-area: m;
}

#previewSelect {
  background-color: red;
  height: 30px;
  grid-area: p;
}

#transcript {
  background-color: blue;
  align-self: stretch;
  grid-area: t;
}

#optional {
  height: 30px;
  grid-area: o;
}

<html>
<body>
  <div id="container" class="l1">
    <div id="video" class="grid-item">Video</div>
    <div id="metadata" class="grid-item">Metadata</div>
    <div id="previewSelect" class="grid-item">Preview-Select</div>
    <div id="transcript" class="grid-item">Transcript</div>
    <div id="optional" class="grid-item">Optional</div>
  </div>
</body>
</html>
The problem was that DBeaver was not running the entire script. You have to click the "Execute SQL Script" button (the third from the top). The "Execute SQL Query" button (the top one) is not sufficient.

Do you want to expose the fact that the string ends with \0?
My solution was to add this to application.yaml
kafka:
  producer:
    value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
    properties:
      avro.remove.java.properties: true
This makes KafkaAvroSerializer properly strip the type object down to just "string"
Looks like a 32/64-bit problem. From launcher.library I see you are using 32-bit Eclipse. From the Java path it looks like 64-bit Java. You can confirm the Java version by running java -version.
Try 32-bit Java or get a 64-bit Eclipse.
My assumption here is that your OS is 64-bit.
You have to store the cart details locally in app data, and remove them once the order is paid successfully. That way it stays in sync in real time: if you exit the app and reopen it, you can read the cart details back.
I'm using this plugin, and I was facing the same issue.
After some trial and error I just ran :Dotnet _server update and dotnet tool install -g EasyDotnet and the issue was fixed.
To be fair, I don't know exactly what is happening, but maybe you could give that plugin a try, use the built-in Roslyn, and run those commands; you can check its docs as well.
I was faced with the task of sorting data by one of two dates depending on the unit's status.
var date = DateTime.UtcNow;
items = items.OrderBy(x => x.StatusId != 1)
.ThenBy(x => x.StatusId == 1 ? x.Date : date)
.ThenByDescending(x => x.Created);
@Jillian Hoenig
We have now fixed the tutorial to use wp-env instead of wp-now - https://faustjs.org/docs/tutorial/learn-faust/
Thank you so much for letting us know about the issues you were having; hopefully this fixes them.
This is 100% a client-side state-reset issue, not a browser caching issue.
You MUST reset the Redux store in your logout functionality.
Add this inside clearAuthData():
dispatch(resetCourses());
dispatch(resetUserSlice());
dispatch(resetFriendsSlice());
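A minimal sketch of what such a reset reducer can look like (plain reducer functions here so it stays self-contained; the slice name, action type, and state shape are placeholders — with Redux Toolkit you would add a `reset` reducer to each `createSlice` instead):

```javascript
// Plain-reducer sketch of resetting a slice on logout.
const initialCourses = { list: [], loading: false };

function coursesReducer(state = initialCourses, action) {
  switch (action.type) {
    case "courses/reset":
      return initialCourses; // drop the previous user's cached data
    default:
      return state;
  }
}

const staleState = { list: ["algebra", "history"], loading: false };
const freshState = coursesReducer(staleState, { type: "courses/reset" });
console.log(freshState.list.length); // 0
```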
Have you tried the unpackDirName option mentioned here: https://www.electron.build/app-builder-lib.interface.portableoptions
According to the docs, that would allow the app to use the same folder in temp for its files, and hopefully that could let it find its files without unpacking everything on startup.
I have created a package, django-superset-integration, to embed an Apache Superset dashboard in a Django app: https://pypi.org/project/django-superset-integration/
(it can still be improved)
My GitHub Actions steps for achieving the above and how I'm using it for my Supabase changes deployment:
\.txt(.*)\n
will match everything between .txt and the line break.
The . needs to be escaped with \ since it's part of the regex syntax.
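For instance, in Python (the sample string is made up):

```python
import re

text = "backup.txt was created yesterday\nnext line"
match = re.search(r"\.txt(.*)\n", text)
print(match.group(1))  # " was created yesterday"
```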
In fact, the problem is not related to the library or the project. When you run npm install on a local project, it creates a symlink from your node_modules to the local library, and it seems that Angular's ng serve command doesn't work well with this. If you run npm install PATH_TO_LOCAL_LIBRARY_DIST --install-links instead, it will work as expected.
Another way, if you only care about the name and not the numeric value:
public class SortOrder
{
public const string Newest = "newest";
public const string Rating = "rating";
public const string Relevance = "relevance";
}
SortOrder.Newest -> newest
nameof(SortOrder.Newest) -> Newest
Using Eclipse and JUnit, I got a similar error message when the test function lacked parentheses.
Note that your example class lacks a closing bracket '}'.
Adding a closing bracket could help you solve this issue.
If you still have a similar error, I would advise checking:
Is the testing library on the modulepath or classpath? JUnit has to be on the classpath.
Are the imports correct according to the documentation?
Is the unit test file in a package of the same name as the tested class?
Is the package of the tested class imported? E.g.: package myClass
alright here’s the thing — your ScrollTrigger isn’t broken, it’s just not dying when you leave the page.
on local dev everything constantly reloads so triggers get reset.
in production? nope. they survive. like cockroaches.
so the first time you load / it works.
you navigate away, come back, the old trigger is still pinned somewhere in gsap limbo, new trigger tries to run → boom, nothing happens.
kill the old trigger before making a new one
kill timeline + trigger properly on cleanup (return)
stop trusting revertOnUpdate to magically fix it — it won't
optionally turn off pinReparent, it causes visual chaos inside react
once you reset gsap manually, the animation works again every time you come back to home page — production included.
useGSAP(() => {
if (pathname !== '/') return;
const el = sectionRef.current;
if (!el) return;
ScrollTrigger.getById('process-section-pin')?.kill(); // kill ghosts
const q = gsap.utils.selector(el);
const isMobile = window.innerWidth < 768;
const scrollDistance = isMobile ? 1500 : 2000;
const tl = gsap.timeline()
.to({}, { duration: 0.4 })
.to(q('.slide-0'), { top: '100%', duration: 0.25, ease: 'power2.inOut' })
.to(q('.slide-1'), { top: '0%', duration: 0.25, ease: 'power2.inOut' }, '<')
.to({}, { duration: 0.4 })
.to(q('.slide-1'), { top: '100%', duration: 0.25, ease: 'power2.inOut' })
.to(q('.slide-2'), { top: '0%', duration: 0.25, ease: 'power2.inOut' }, '<')
.to({}, { duration: 0.4 });
const trigger = ScrollTrigger.create({
id: 'process-section-pin',
trigger: el,
start: 'top top',
end: `+=${scrollDistance}`,
scrub: 0.5,
pin: true,
pinSpacing: true,
animation: tl,
invalidateOnRefresh: true,
});
const onResize = () => trigger.refresh();
window.addEventListener('resize', onResize);
return () => {
window.removeEventListener('resize', onResize);
trigger.kill();
tl.kill();
};
}, { dependencies:[pathname], scope:sectionRef });
Adding "log4j-core-2.22.1.jar" to "Annotation Processing | Factory Path" is not enough:
Also "log4j-api-2.22.1.jar" must be added.
Is there a way I can use a date from a worksheet cell..example dat = cell("H2")? In order to get the date? I am not sure how else to explain it. Other than I receive the Sched workbook (Sched 11.30.25). I create a workbook for picks from that schedule to add to it (Shift Pickup 11.30.25). On 11/24 I receive another weeks schedule (Sched 12.07.25). Again I create another workbook (Shift Pickup 12.07.25). The date is the only thing that changes on the workbook titles. So like today WB Shift Pickup 11.30.25 was created with dat =Date - Weekday(Now(), 1) + 15, and today I had to go in and change it in that specific workbook to dat=Date-Weekday(Now(),1)+8 while the other workbook remains at +15. As we move into next week I will have to go into that specific workbook and change 15 to 8. Hopefully that explains it better. I am looking for a way to make it read and tell the difference between which workbook to look at. This is also used to open and close two other workbooks as I input names into the Shift Pick (Date).
Thanks to @kikon for the insight. The issue is indeed geometric: trying to center labels on the edge of the container (where `left: 0` and `right: 0`) will inevitably cause them to be cut off.
There are two ways to solve this, depending on your ECharts version.
### Solution 1: The Modern Way (ECharts v5.5.0+)
If you can upgrade to version 5.5.0 or later, ECharts introduced specific properties to handle exactly this scenario: `alignMinLabel` and `alignMaxLabel`.
This allows the first label to be left-aligned and the last label to be right-aligned automatically, keeping the middle ones centered.
xAxis: {
// ... other configs
axisLabel: {
// Force the first label to align left (inside the chart)
alignMinLabel: 'left',
// Force the last label to align right (inside the chart)
alignMaxLabel: 'right',
align: 'center', // All other labels remain centered
// ...
}
}
### Solution 2: The Manual Way (For older versions)
Since I am currently on **v5.4.2** and cannot upgrade immediately, the workaround is to manually set the `align` property using a callback function. This achieves a similar effect without using `padding` (which messes up the equidistant spacing).
xAxis: {
// ...
axisLabel: {
// ...
align: function(value, index) {
// Assuming you know the index or value of your start/end points
if (index === 0) return 'left';
if (index === data.length - 1) return 'right';
return 'center';
}
}
}
This ensures the labels are fully visible inside the container without distorting the visual spacing between ticks.
This issue is caused by a Flutter version conflict with the GetX version:
Flutter (channel stable, 3.38.2)
and get: ^4.7.3.
Before the Flutter 3.38.2 update the GetX snackbar was working properly;
when I downgraded to Flutter 3.35.6 it started working again.
When you first add a workflow to a GitHub repo from a feature branch, if you want to test it before merging to main you must give it a push trigger; without that, as you said, it won't appear in the UI.
How do you solve the empty values? When you call them you must provide a default with the value you want, for example: echo "${{ inputs.actions || 'create' }}".
After you finish your tests you can remove the push trigger and merge to main. Only once the workflow is in main can you use the workflow_dispatch trigger, and then even from side branches.
If you don't want to test with push and default values, you can merge it to main and then continue testing in a feature branch.
Why is it like this? I have no idea... but this is how GitHub Actions works...
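As a sketch, a temporary test setup on a feature branch might look like this (the workflow name, input name, and branch name are placeholders):

```yaml
name: test-workflow
on:
  push:                       # temporary, so the run shows up while testing
    branches: [my-feature-branch]
  workflow_dispatch:
    inputs:
      actions:
        description: "Action to perform"
        required: false
        default: "create"
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # push events carry no inputs, so fall back to a default
      - run: echo "${{ inputs.actions || 'create' }}"
```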
Crore is an Indian unit, by the way, that people in other countries do not understand.
I have the same issue in v18.0.2, but I'm using AppDelegate.
My issue only occurs when the user authorizes via the Facebook app; authorizing in the WebView is fine.
I researched and tried a lot but didn't find any solution. The function func application(... open url: URL, ...) -> Bool in AppDelegate receives the callback, and I return ApplicationDelegate.shared.application(app, open: url, options: options) from it, but the webview is still there.
I believe you're missing the playlist parameter, which needs to be set to the video id, as shown in @C3roe's answer.
Note the added &playlist=1u17pS4_tw0, I've just tested it and it should work! :)
<iframe width="560" height="315" src="https://www.youtube.com/embed/1u17pS4_tw0?si=V06Ss5eD89_PBShj&controls=1&autoplay=0&rel=0&modestbranding=1&mute=1&loop=1&playlist=1u17pS4_tw0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
The closest extension I can find is Peek Imports, which lets you open a peek window with a key binding (default Ctrl+I) showing the import statements at the very top of a JavaScript/TypeScript file. I just installed it too, hence not many comments to share yet.
Yes , ~1.2 minutes to read ~10M rows into Python is normal, and you’re already close to the practical limits. At that scale, the bottleneck isn’t the database but the combination of network transfer + Python needing to allocate millions of objects.
You can shave some time off with driver tweaks (server-side cursor, larger fetch sizes, binary protocol), but you won’t get a 5×–10× speedup unless you change the data handling model entirely (e.g., Arrow/Polars/NumPy to avoid Python object creation).
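The one realistic lever is fetching in large batches rather than row-by-row. A self-contained sketch of the pattern, using sqlite3 as a stand-in for the real database (with PostgreSQL you would use a psycopg2 server-side cursor, created via conn.cursor(name=...), and tune its itersize):

```python
import sqlite3

# In-memory stand-in for the real 10M-row table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, i * 0.5) for i in range(100_000)))

cur = conn.execute("SELECT id, val FROM t")
total = 0
while True:
    chunk = cur.fetchmany(10_000)  # batch fetch instead of one row at a time
    if not chunk:
        break
    total += len(chunk)  # a real consumer would hand the chunk to NumPy/Arrow
print(total)  # 100000
```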
I think this is related to sharding the data into chunks and delegating them to processes based on the CPU cores or memory available. As you mentioned, it relates to the multiprocessing and batching strategy topic.
With Veo 3, you can produce professional-level videos even without technical skills. It is fast, easy to use, and perfect for social media content, marketing, and creative projects.
Were you able to do that? Please reply with how you did it if you found a solution; I'm currently facing the same issue.
Generate hash for each binary file, then one hash of all hashes. Plus PGP/GPG signature(s).
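A minimal sketch of the hash-of-hashes idea (file names and contents are placeholders; the PGP/GPG signing step is separate and not shown):

```python
import hashlib
import os
import tempfile

def hash_of_hashes(file_paths):
    """Hash each file, then hash the joined list of per-file hashes."""
    file_hashes = []
    for path in sorted(file_paths):  # sort so the result is order-independent
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        file_hashes.append(h.hexdigest())
    return hashlib.sha256("\n".join(file_hashes).encode()).hexdigest()

# Example with two temporary files:
with tempfile.TemporaryDirectory() as d:
    for name, data in [("a.bin", b"hello"), ("b.bin", b"world")]:
        with open(os.path.join(d, name), "wb") as f:
            f.write(data)
    digest = hash_of_hashes([os.path.join(d, n) for n in ("a.bin", "b.bin")])
    print(len(digest))  # 64 hex characters
```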
Thanks for your question. The chipset(s) you’re referring to are actually best supported through our support portal here. Please allow us to assist you better by raising the issue there.
You can also contact the Qualcomm Sales team or your local Distributor for additional help.
Any reason you are reading it into python and not executing in the DB?
Run from 32 bits:
Result:
1.8.0_201
8192
Yes it is possible to do this with CSS since 2023:
table:not(:has(tr))
See https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/Selectors/:has
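For example, assuming the goal is to hide tables without rows:

```css
/* Hide any table that contains no <tr> elements */
table:not(:has(tr)) {
  display: none;
}
```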
This problem often arises in Universal Windows Platform (UWP) development when trying to use a CompositionEffectBrush directly as the fill for a CompositionSpriteShape.
To achieve an irregular-shaped blur:
CompositionSpriteShape can only be filled with a CompositionColorBrush or a CompositionSurfaceBrush (e.g., loaded from a file or created via CompositionDrawingSurface). It cannot directly use a CompositionEffectBrush.
You must bind the shape to a CompositionContainerShape that is parented by a CompositionVisual (like a SpriteVisual).
Apply the CompositionEffectBrush to the SpriteVisual.Brush property instead.
Use the CompositionSpriteShape (with an opaque fill like a CompositionColorBrush) as the mask by applying the CompositionContainerShape to the SpriteVisual.Clip property or, more robustly, by using a CompositionGeometricClip.
Thank you very much for the answers. This PDF is offered to everyone who will work with this type of application. It was not made by me; we are simply REQUIRED to use it. The manual version will probably work at some point, and I will only be able to use XDP to fill out the PDF as long as it carries no electronic signature. Correct?
I've got an answer from Jasper support:
Cloud Software Group has directed Jaspersoft to cease all business with Russian-occupied territories in Ukraine. Due to the fact that CSG does not have the resources to differentiate regions in Ukraine, a business decision was made to cease providing software to all of Ukraine until further notice.
So the only way for Ukrainians is to set up JasperStudio 6.21.5+ (an alternative method will show up) and get the password using that alternative method through a VPN.
It is unfair, but we have to accept the situation.
A 401 User not found error from OpenRouter usually doesn’t mean the API key is wrong — it often happens when the server’s IP is blocked or flagged.
Since:
The same key works on your local machine
The same key fails even with a raw curl from your production server
The env variable is correct
…the most likely cause is that OpenRouter has blocked or limited your Azure VM’s public IP (this is common with cloud provider IP ranges due to abuse protection).
Contact OpenRouter support and give them your server’s public IP.
They can check and unblock it.
If you want to confirm it’s IP-related:
Try calling the API from another server or via a different outbound IP.
If it works there, the IP is 100% the issue.
There’s no problem with your API key or your code — it’s almost certainly an IP reputation block.
The most standard workaround for passing a dynamic list to an IN clause in systems that don't support array-type bind parameters (which includes current versions of Doris) is string interpolation to construct the SQL.
While you noted it's not ideal for safety, you must ensure the list of IDs is fully sanitized and cast to the expected type (e.g., all integers) before being interpolated into the query string. Alternatively, you could use a Doris function like ARRAY_CONTAINS on a temporary string or array column that stores the ID list, but this typically involves more complex logic and potential performance trade-offs compared to safe string building for the IN clause.
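A minimal sketch of the safe-interpolation approach (table and column names are placeholders), where casting every element to int guarantees nothing unsafe can reach the SQL string:

```python
def build_in_query(ids):
    # int() raises ValueError on anything that is not a whole number,
    # so injection payloads can never reach the interpolated string.
    safe_ids = ",".join(str(int(i)) for i in ids)
    return f"SELECT * FROM my_table WHERE id IN ({safe_ids})"

print(build_in_query([1, 2, 3]))
# SELECT * FROM my_table WHERE id IN (1,2,3)
```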
Needed the same, but could not find anything. Wrote my own:
https://github.com/bmshouse/metriclimiter
An alternative is to re-render the Trix content after saving. I built a simple package to solve this problem that can properly render YouTube videos, Tweets, code blocks, and images from Trix, here's the link in case it's useful for future travelers: https://github.com/Justintime50/trix-tools.
Adding a Vimeo plugin would be a great addition!
There isn't a feature like that for QCalendarWidget. There is an instruction video on how to develop that feature yourself here: https://www.youtube.com/watch?v=At0JMC0rVfg
I am not sure if you can help. The website connects to Phantom and users can send me Solana; I do receive the Solana in my wallet, but the server fails to send the token to the receiving address.
I also have a game on the website where players earn tokens; when they claim a reward, the server sends it fine.
// Source - https://stackoverflow.com/q/79831318
// Posted by Baangla Deshi, modified by community. See post 'Timeline' for change history
// Retrieved 2025-11-27, License - CC BY-SA 4.0
<div id="google_translate_element"></div>
<script type="text/javascript">
document.addEventListener('DOMContentLoaded', function() {
// Get a reference to the code element
const codeElement = document.getElementById('myCodeElement');
// Change the translate attribute to "yes"
if (codeElement) {
codeElement.setAttribute('translate', 'yes');
console.log("Translate attribute changed to 'yes'.");
} else {
console.log("Element with ID 'myCodeElement' not found.");
}
});
// This must be a global function so the Google Translate loader can call it
function googleTranslateElementInit() {
new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
}
</script>
Do you have a good reason for continuing to use passwords, though? It might be possible for you to go passwordless by using AWS's Kerberos service to allow application/service/database mutual authentication without needing passwords - and therefore - without needing password rotation.
The answer is rather simple actually; when I was checking for intersections in triangle_triangle_partition, I was not filtering out duplicates. Now that I am, everything works as expected. The corrected function:
--- returns the partition of the first triangle into three subtriangles,
--- if it intersects the second; otherwise produces nil
--- @param T1 table<table<number>> the first triangle
--- @param T2 table<table<number>> the second triangle
--- @return table<table<table<number>>,table<table<number>>,table<table<number>>> table of sub triangles
local function triangle_triangle_partition(T1, T2)
local I = triangle_triangle_intersections(T1, T2)
if I == nil then return nil end
if #I == 0 then return nil end
if #I == 1 then return nil end
-- if #I ~= 2 then assert(false, ("I is not 2, it is instead: %f"):format(#I)) end
local IO = I[1]
local IU = vector_subtraction(I[2], IO)
local I_basis = {IO[1], IU[1]}
local T1A = {T1[1], T1[2]}
local T1AU = vector_subtraction({T1A[2]}, {T1A[1]})
local T1A_basis = {T1A[1], T1AU[1]}
local T1B = {T1[2], T1[3]}
local T1BU = vector_subtraction({T1B[2]}, {T1B[1]})
local T1B_basis = {T1B[1], T1BU[1]}
local T1C = {T1[3], T1[1]}
local T1CU = vector_subtraction({T1C[2]}, {T1C[1]})
local T1C_basis = {T1C[1], T1CU[1]}
local T2A = {T2[1], T2[2]}
local T2AU = vector_subtraction({T2A[2]}, {T2A[1]})
local T2A_basis = {T2A[1], T2AU[1]}
local T2B = {T2[2], T2[3]}
local T2BU = vector_subtraction({T2B[2]}, {T2B[1]})
local T2B_basis = {T2B[1], T2BU[1]}
local T2C = {T2[3], T2[1]}
local T2CU = vector_subtraction({T2C[2]}, {T2C[1]})
local T2C_basis = {T2C[1], T2CU[1]}
local points = {}
local non_intersecting = nil
local function add_unique(points, pt, eps)
for _, p in ipairs(points) do
if distance(p, pt) < eps then
return false -- Not unique
end
end
table.insert(points, pt)
return true -- Unique
end
-- T1A
local int1 = line_line_intersection(I_basis, T1A_basis)
if int1 == nil then
int1 = {solution = {}}
end
if #int1.solution ~= 0 then
local t = int1.solution[1]
local intersect = vector_addition(IO, scalar_multiplication(t, IU))
if point_line_segment_intersecting(intersect, T1A) then
if not add_unique(points, intersect, eps) then
non_intersecting = "T1A"
end
else
non_intersecting = "T1A"
end
else
non_intersecting = "T1A"
end
-- T1B
local int2 = line_line_intersection(I_basis, T1B_basis)
if int2 == nil then
int2 = {solution = {}}
end
if #int2.solution ~= 0 then
local t = int2.solution[1]
local intersect = vector_addition(IO, scalar_multiplication(t, IU))
if point_line_segment_intersecting(intersect, T1B) then
if not add_unique(points, intersect, eps) then
non_intersecting = "T1B"
end
else
non_intersecting = "T1B"
end
else
non_intersecting = "T1B"
end
-- T1C
local int3 = line_line_intersection(I_basis, T1C_basis)
if int3 == nil then
int3 = {solution = {}}
end
if #int3.solution ~= 0 then
local t = int3.solution[1]
local intersect = vector_addition(IO, scalar_multiplication(t, IU))
if point_line_segment_intersecting(intersect, T1C) then
if not add_unique(points, intersect, eps) then
non_intersecting = "T1C"
end
else
non_intersecting = "T1C"
end
else
non_intersecting = "T1C"
end
if #points ~= 2 then
-- print("Partition failure: got", #points, "points")
-- print("Triangle 1:", T1)
-- print("Triangle 2:", T2)
-- return nil
end
local quad = {}
local tri1
local A, B = points[1], points[2]
table.insert(quad, A[1])
table.insert(quad, B[1])
if non_intersecting == "T1A" then
table.insert(quad, T1A[1])
table.insert(quad, T1A[2])
tri1 = {A[1], B[1], T1B[2]}
elseif non_intersecting == "T1B" then
table.insert(quad, T1B[1])
table.insert(quad, T1B[2])
tri1 = {A[1], B[1], T1C[2]}
elseif non_intersecting == "T1C" then
table.insert(quad, T1C[1])
table.insert(quad, T1C[2])
tri1 = {A[1], B[1], T1A[2]}
end
quad = centroid_sort(quad)
if distance({quad[1]},{quad[3]}) > distance({quad[2]},{quad[4]}) then
return {
tri1 = tri1,
tri2 = {quad[2], quad[1], quad[4]},
tri3 = {quad[2], quad[3], quad[4]}
}
else
return {
tri1 = tri1,
tri2 = {quad[1], quad[2], quad[3]},
tri3 = {quad[3], quad[4], quad[1]}
}
end
end
Firefox opens with a blank page but does not go to the target URL. Please help.
Use the dotenv package:
import dotenv from "dotenv";
and its config:
dotenv.config();
It will start reading the .env file contents.
Or you can use a Glue job, which is similar to a Lambda function but doesn't support Node.js; if your script is in Python, a Glue job might be a good alternative.
Thanks to @Matt Gibson's comment, I realized that the contents of an ansible.cfg must follow the options listed in Ansible Configuration Settings; self-defined keys are not supported.
No specific day. The way it works is I get the schedule and it goes into another workbook similar to the schedule. This is the workbook I work in as people pick up extra shifts for the week. Once the new schedule comes out, they are available to pick up shifts for that week, hence the second workbook. Sched 1 is the original I just pull initial data from. Once data is in WB1 (runs on the 8) I add people daily or so. Once another schedule posts, WB2 (runs on the 15) is created, and I then work with WB1 and WB2 as people pick up shifts until that schedule ends. Hope that makes sense. Once WB1 has gotten through most of the week, WB2 has to be changed to (dat (8)).
I think what @Tim is asking is on which day during any week do you start being concerned about looking at "next week's" schedule? Is that a clearly defined rule? (ie. up until Thursday you're always concerned with looking at this week's workbook, then Friday onwards you're looking at next week's schedule?). Obviously, if you say that the workbooks are sometimes created on Monday, but sometimes not until Tuesday, that's not necessarily ideal in terms of coding set logic. Otherwise, if your code is being triggered in a "master" spreadsheet, then just have a cell where you set the "this week" / "next week" parameter, and use the value of that cell to determine your "8 / 15" switch
I am asking this question for a project that uses only C++20, with no previous versions of C++ or C.
Please edit your question to include what you've tried and what errors or behaviour you're seeing. See this article on how to ask a question so it's possible to answer.