Sir, please help me code: Sarangheo Autotype in JavaScript.
The solution for me was to switch from path.toShapes(true) to SVGLoader.createShapes(path) when using ExtrudeGeometry for the shapes.
The issue was ultimately the workflow steps and not getting all the session keys properly set. Clicking the Sign In button takes you to https://auth.pff.com. I tried going directly to https://auth.pff.com, but only when I adjusted the flow to go to https://premium.pff.com and click the "sign in" button did everything populate correctly. Otherwise, the session key for "loggedIn" was not getting set to true.
I did have to add a 1-2 second sleep as well to make sure the captcha loaded. There's no interaction with it, but you do have to let it load.
You can find a step-by-step explanation, and use custom input for the Aho-Corasick algorithm, here.
You could do this with randcraft
from randcraft.constructors import make_discrete
bernoulli = make_discrete(values=[0, 1], probabilities=[0.8, 0.2])
bernoulli_100 = bernoulli.multi_sample(100)
bernoulli_100.plot()
results = bernoulli_100.sample_numpy(5)
print(results)
# [10. 15. 20. 14. 24.]
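For comparison, here is the same experiment using only the standard library (a sketch that does not use randcraft):

```python
import random

random.seed(0)

def count_successes(n=100, p=0.2):
    """Count successes in n Bernoulli(p) trials."""
    return sum(1 for _ in range(n) if random.random() < p)

# Five repetitions of the 100-trial experiment, as in the randcraft example
counts = [count_successes() for _ in range(5)]
print(counts)  # values scattered around the mean n*p = 20
```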
Where did you get the Bluetooth SDK for the ACR1255U-J1? Mine came only with a Java SDK, which won't work for Android.
I found the answer for this
Had to allow this permission for the EKS node IAM role
ecr:BatchImportUpstreamImage
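For context, the permission would be granted with a policy statement along these lines (a sketch; scope the `Resource` down to your upstream-enabled registries as needed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:BatchImportUpstreamImage"],
      "Resource": "*"
    }
  ]
}
```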
Try installing Rosetta via softwareupdate --install-rosetta. I had the same issue, and running xcrun simctl list runtimes -v showed a message about a lack of Rosetta.
I have been facing the same issue that you have described.
After updating the library "com.google.android.gms:play-services-ads" to version "24.6.0" it got solved.
This version was released on September 9th and is the latest.
I hope it works for you too!
https://mvnrepository.com/artifact/com.google.android.gms/play-services-ads/24.6.0
If the problem is located in a third-party gem instead of your own code, then it might be easier to use Andi Idogawa's file_exists gem, at least temporarily (explanatory blog post).
bundle add file_exists
Then add to e.g. config/boot.rb:
require 'file_exists'
Using an ontology to guide the tool sounds smart, like checking everything carefully to make sure it works as expected.
In case you're struggling with Calendly and only need an API, check out Recal: https://github.com/recal-dev. We also open-sourced our scheduling SDK and are integrating a Calendly wrapper API right now. If you want early access, just shoot me a message: [email protected]
Did you manage to run it?
I have a similar problem with the H747.
mkdir /tmp/podman-run_old
mv -v /tmp/podman-run-* /tmp/podman-run_old/
# start all dead containers
podman start $(podman ps -qa)
I would turn to window functions and perhaps a common table expression, such as:
with cte as (
    select id, multiplier,
           row_number() over (partition by id order by multiplier) as rn
    from table
    where multiplier != 0
)
update table
set id = concat(table.id, cte.rn)
from cte
where table.id = cte.id
  and table.multiplier != 0;
/* Note that I don't work with UPDATE much, and haven't tested this query, so the syntax might be off (the UPDATE ... FROM form is PostgreSQL-style). It's also a little expensive. I'm not sure if that can be improved. Best of luck. */
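A toy SQLite run of the same idea, with made-up table and column names, showing the window function producing the per-id suffix (the UPDATE itself is left out, since UPDATE ... FROM support varies by engine):

```python
import sqlite3

# Made-up data: duplicated ids that should get a sequence suffix appended
con = sqlite3.connect(":memory:")
con.execute("create table t (id text, multiplier int)")
con.executemany("insert into t values (?, ?)",
                [("a", 1), ("a", 2), ("b", 0), ("a", 3)])

rows = con.execute("""
    with cte as (
        select id,
               row_number() over (partition by id order by rowid) as rn
        from t
        where multiplier != 0
    )
    select id || rn as new_id from cte
    order by new_id
""").fetchall()
print(rows)  # [('a1',), ('a2',), ('a3',)]
```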
Have you solved this problem? I think I have a similar issue. Br, Joachim
This is definitely feasible, but we would need to look at your webhook listener code.
On the DocuSign side, please refer to this documentation on how to set up and use Connect notifications.
https://developers.docusign.com/platform/webhooks/
Thank you.
I popped by here when researching the 255 Transpose limit, as I expect others have and may. I got a bit thrown off course, but finally straightened it out in my brain, and so thought I could make a worthwhile contribution for others passing in the future.
There are two issues here, which may not be immediately obvious.
_1) The Transpose function does not like working on a Variant element type array where one or more of the array elements is a string of more than 255 characters.
If we are dealing with 1-dimensional arrays, as in the original question, then there is a way to get over this without looping, while still using the Transpose function: use the Join function on the Variant array (with an arbitrary separator), then use the Split function on the result. We then end up with a String array, and Transpose is happy with any elements of more than 255 characters.
This next demo coding almost gets what was wanted here, and variations of it may be sufficient for some people having an issue with the 255 Transpose Limit.
Sub RetVariantArrayToRange() '
Let ActiveSheet.Range("M2:M5") = TransposeStringsOver255()
End Sub
Function TransposeStringsOver255()
Dim myArray(3) As Variant 'this the variant array I will attempt to write
' Here I fill each element with more than 255 characters
myArray(0) = String(300, "a")
myArray(1) = String(300, "b")
myArray(2) = String(300, "c")
myArray(3) = String(300, "d") '
' Let TransposeStringsOver255 = Application.Transpose(myArray()) ' Errors because Transpose does not work on a Variant type array if any element is a string greater than 255 characters
Dim strTemp As String, myArrayStr() As String
Let strTemp = Join(myArray(), "|")
Let myArrayStr() = Split(strTemp, "|")
Let TransposeStringsOver255 = Application.Transpose(myArrayStr())
End Function
_2) That last coding does not do exactly what was wanted. The specific requirement was along these lines, (if using the function above) :
…..select an area of 4 rows x 1 column and type "=TransposeStringsOver255()" into the formula bar (do not enter the quotes). and hit (control + shift + enter)…..
That last coding does not work to do exactly that.
As Tim Williams pointed out, the final array seems to need to be a String array (even if being held in a Variant variable ). Why that should be is a mystery, since the demo coding above seems to work as a workaround to Transpose Strings Over 255 in a Variant Array To a Range.
To get over the problem, we loop the array elements into a String array. Then the mysterious problem goes away.
This next coding would be the last coding with that additional bit
Function TransposeStringsOver255VariantArrayToSelectedRange()
Dim myArray(3) As Variant 'this the variant array I will attempt to write
' Here I fill each element with more than 255 characters
myArray(0) = String(300, "a")
myArray(1) = String(300, "b")
myArray(2) = String(300, "c")
myArray(3) = String(300, "d") ' -
' Let TransposeStringsOver255VariantArrayToSelectedRange = Application.Transpose(myArray()) ' Errors because Transpose does not work on a Variant type array if any element is a string greater than 255 characters
Dim strTemp As String, myArrayStr() As String
Let strTemp = Join(myArray(), "|")
Let myArrayStr() = Split(strTemp, "|")
' Let TransposeStringsOver255VariantArrayToSelectedRange = Application.Transpose(myArrayStr()) ' Errors because "Seems like you need to return a string array" Tim Williams: https://stackoverflow.com/a/35399740/4031841
Dim VarRet() As Variant
Let VarRet() = Application.Transpose(myArrayStr())
Dim strRet() As String, Rw As Long
ReDim strRet(1 To UBound(VarRet(), 1), 1 To 1)
For Rw = 1 To UBound(VarRet(), 1)
Let strRet(Rw, 1) = VarRet(Rw, 1)
Next Rw
Let TransposeStringsOver255VariantArrayToSelectedRange = strRet()
End Function
To compare in the watch window:
The first coding ends up getting this array, which in many situations will get the job done for you
https://i.postimg.cc/fWYQvsTy/c-Transpose-Strings-Over255.jpg
But for the exact requirement of this Thread, we need what the second coding gives us, which is this:
https://i.postimg.cc/FRL585yP/f-Transpose-Strings-Over255-Variant-Array-To-Selected-Range.jpg
_.______________________________________-
Since we are now having to loop through each element anyway, we might just as well forget about the Transpose function and change the loop slightly to do the transpose at the same time.
Function TransposeStringsOver255VariantArrayToSelectedRange2()
Dim myArray(3) As Variant 'this the variant array I will attempt to write
' Here I fill each element with more than 255 characters
myArray(0) = String(300, "a")
myArray(1) = String(300, "b")
myArray(2) = String(300, "c")
myArray(3) = String(300, "d") ' -
Dim strRet() As String, Rw As Long
ReDim strRet(1 To UBound(myArray()) + 1, 1 To 1)
For Rw = 1 To UBound(myArray()) + 1
Let strRet(Rw, 1) = myArray(Rw - 1)
Next Rw
Let TransposeStringsOver255VariantArrayToSelectedRange2 = strRet()
End Function
We have now arrived at a solution similar to that from Tim Williams.
(One thing that initially threw me off a bit was the second function from Tim Williams, as some smart people had told me that to get an array out of a function it must be
Function MyFunc() As Variant
I had never before seen a function like
Function MyFunc() As String()
)
Hoping this bit of clarification may help some people passing as I did
Alan
Not an answer but an extension of the question.
Suppose I want to copy the contents of, say, File1 to a new File2 while only being able to have one file open at a time in SD.
It seems that I can open File1 and read to a buffer until say a line end, and then close File1, open File2 and write to File2. Close File2 and reopen File1.
Then I have a problem, having reopened File1 I need to read from where I had got to when I last closed it. Read the next until say line end, close File1, reopen File2 as append and write to File2.
The append means that File 2 gradually accumulates the information so no problem but I am unclear as to how in File1 I return to the last read location.
Do I need to loop through the file each time I open it for the number of, until line end, reads previously done?
This thread is quite old, but I came across a similar issue.
I am trying to copy millions of files from one server to another over the network.
When I use robocopy without /mt, it seems to work fine. But when I add /mt, /mt:2, etc., it gets stuck on the same screen as above. RAM usage keeps increasing. I waited 20 minutes but nothing happened; it just copied the folders but not the files inside. This happens on Windows Server 2016.
Can anyone suggest something?
To target a specific file size (worked for jpeg), say 300kb:
convert input.jpg -define jpeg:extent=300kb output.jpg
This forces the output file to be about 300 KB.
It seems the issue was within Flutter's code and my IDE was trying to debug it.
My VS Code debugging configuration was set to "Debug my code + packages" so it was also trying to debug Flutter's code and that's why it would open up binding.dart because there was an error in that code.
Setting debugging config to just "Debug my code" should fix this problem!
You can do this from the bottom left in VS Code, just next to the error count and warning counts.
Edit: You can only change this when you're running a debug session. Launch a debug instance and the toggle to change this should appear in the bottom left corner.
Kafka is a stream, not a format.
df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("subscribe", "sparktest") \
.option("startingOffsets", "earliest") \
.load()
It's a Python 3.12 issue; try downgrading to 3.11.
In your nuxt.config.ts do:
// https://nuxt.com/docs/api/configuration/nuxt-config
export default defineNuxtConfig({
$production: {
nitro: {
preset: 'aws-amplify',
awsAmplify: {
runtime: "nodejs22.x"
},
},
},
});
I know it is an old thread, but I still faced this issue on Windows and finally got a working solution after multiple attempts:
$OutputEncoding = [System.Text.Encoding]::UTF8
[System.Console]::OutputEncoding = [System.Text.Encoding]::UTF8
python script.py > output.txt
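Alternatively (an option not mentioned above), the encoding can be forced from inside the Python script itself, which makes the redirection independent of the PowerShell session settings:

```python
import sys

# Re-encode stdout as UTF-8 so that `python script.py > output.txt`
# writes UTF-8 regardless of the console code page (Python 3.7+)
sys.stdout.reconfigure(encoding="utf-8")

print("héllo – unicode survives redirection")
```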
I once had to change my CER file from "UTF-16 LE BOM" to "UTF-8". I'm not sure how this applies to you directly, but that's basically the error I got from openssl when working with certificates with the wrong text encoding.
I also faced this issue for many years and found nothing on the internet. After a long time, I finally got a solution: a small but excellent working add-on, linked below.
It's very easy: just install the add-on, copy the data from Excel, go to the Thunderbird compose window, press CTRL + Q, and you are done.
No need for MS Word or any other word processor. Your data will be pasted as-is, with rich text formatting and colors.
https://addons.thunderbird.net/en-US/thunderbird/addon/paste-excel-table-into-compose/
In 2025 I just renamed C:\project\.git\hooks\pre-commit.sample to pre-commit
#!/bin/sh
echo "🚀 Run tests..."
php artisan test
if [ $? -ne 0 ]; then
echo "❌ Test failed!"
exit 1
fi
echo "✅ Passed, pushing..."
exit 0
I believe this has something to do with virtualization but I don't fully understand what's going on, why is this and how do I fix it.
Virtualization is simple: If you have 10000 strings, the UI will only create however many ListViewItem controls are needed to fit the viewport.
When you set CanContentScroll to false, the ScrollViewer will "scroll in terms of physical units", according to the documentation. That means that all 10000 ListViewItems will be created, lagging the UI.
Is there a way to keep it False so it won't show an "empty line" at the end?
By keeping it false, you kill performance. If you want to get rid of the empty line at the bottom and eliminate the lag, you should replace the ListView's items panel with a VirtualizingStackPanel configured to scroll by pixel:
<ListView ScrollViewer.CanContentScroll="True">
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<VirtualizingStackPanel ScrollUnit="Pixel"
IsVirtualizing="True"/>
</ItemsPanelTemplate>
</ListView.ItemsPanel>
</ListView>
ScrollUnit="Pixel" makes the ScrollUnit be measured in terms of pixels, which should eliminate the empty line at the bottom.
Same problem with Blazor Server. The NuGet package BootstrapBlazor bundles the necessary Bootstrap files in the staticwebassets folder, so it should be properly deployed for Blazor, and you can reference it as such:
<link href="_content/BootstrapBlazor/css/bootstrap.min.css" rel="stylesheet" />
I'm facing the same issue while upgrading my Node app to Node 18, using Serverless Component 3.6 and Next.js 14. I tried many ways and didn't find a solution.
This is a non-issue; a one-line extension method will do:
public static bool IsNegative(this TimeSpan value) => value.Ticks < 0;
The yml:
- name: RUN PYTHON ON TARGET
changed_when: false
shell: python3 /.../try_python.py {{side_a}}
become: true
become_user: xxxx
register: py_output
The script (adapted to AAP and tested locally):
import sys

# The survey variable side_a arrives as the first command-line argument
with open("/.../try_txt.txt", "w") as file:
    file.write(sys.argv[1])
The survey contains only the "side_a" variable, and it is working already for bash cases.
Since this question is a bit old and doesn't seem to have a clear answer, here is my proposed approach.
First, I would segment the large dataset into smaller, more manageable chunks based on a time window (for example, creating a separate DataFrame for each month). For each chunk, I would perform exploratory data analysis (EDA) to understand its distribution, using tools like histograms, Shapiro-Wilk/Kolmogorov-Smirnov tests for normality, and QQ-Plots.
In a real-world scenario with high-frequency data, such as a sensor recording at 100 Hz (i.e., one reading every 0.01 seconds), processing the entire dataset at once is impossible if you're working on a local machine. Therefore, I would take a representative sample of the data. I would conduct the EDA on this sample, then calculate the normalization parameters from it. These parameters would then be used as the basis to normalize the rest of the data for that period (e.g., the entire month).
By normalizing the data to a consistent range, such as [0,1], the different segments of data should become directly comparable.
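A minimal standard-library sketch of the sample-then-normalize idea (the data here is synthetic; in practice each chunk would be one time window of the real dataset):

```python
import random

random.seed(0)

# Synthetic stand-in for one time-window chunk of high-frequency readings
chunk = [random.uniform(0.0, 10.0) for _ in range(1000)]

# Estimate min-max normalization parameters from a representative sample
sample = random.sample(chunk, 100)
lo, hi = min(sample), max(sample)

# Normalize the whole chunk with the sampled parameters, clipping values
# the sample did not cover into [0, 1]
norm = [min(max((x - lo) / (hi - lo), 0.0), 1.0) for x in chunk]
print(min(norm), max(norm))  # 0.0 1.0
```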
The documentation is contradictory about the difference between the volatile keyword and VarHandle.setVolatile.
I don't remember the chapter... but the one for VarHandle explicitly states that it resembles a fullFence... which means that at least both setVolatile and getVolatile are seq_cst barriers.
Now, I have my doubts that the keyword version is as strong.
The reason they are so obtuse about it is that within chapter 17 they attempt to try to explain both... the lock monitor and the volatile read/writes as if they were similar.
Chapter 17 treats the concept of "Synchronization order" out of nowhere.
It doesn't explain WHAT enforces it or how it even works under the hood.
I know from experience that the keyword is a lock queue... so it being "totally ordered" is not true for MCS/CLH lock queues, which could very well work perfectly fine with both acquire and release semantics.
But anyways...
Chapter 17.4.3 makes a subtle distinction in my mind...
It states:
"A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)"
Notice the property "synchronization order" is not explicitly granted to the "write to a volatile variable v" action/subject.
This means that the "total order" property that was previously granted to the "synchronization order" concept... is not the same as a volatile read/write; in the paragraph prior, Chapter 17.4.2, it was only implied that both were "synchronization actions"... not an order.
17.4.2. Actions
An inter-thread action is an action performed by one thread that can be detected or directly influenced by another thread. There are several kinds of inter-thread action that a program may perform:
Read (normal, or non-volatile). Reading a variable.
Write (normal, or non-volatile). Writing a variable.
Synchronization actions, which are:
Volatile read. A volatile read of a variable.
Volatile write. A volatile write of a variable.
Then, in the next chapter, the "total order" property is given to the concept of "synchronization order"... but not actions.
17.4.3. Programs and Program Order
Among all the inter-thread actions performed by each thread t, the program order of t is a total order that reflects the order in which these actions would be performed according to the intra-thread semantics of t.
Which makes me guess... that what they are trying to talk about in this paragraph is the synchronized keyword... aka the monitor/CLH queue.
In which case... YES... it behaves as a seq_cst barrier no doubt about that...
Now... going back to the first quote:
"A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)"
The fact that the documentation uses the word "variable v" implies a monotonic-base sequencing defined by a "per-address sequential consistency", which... as far as I understand... is the BASE Program Order sequencing respected by ALL memory model/processors (bare metal) ... no matter how weak or strong they are.
And if any JIT or compiler disobeys this principle... then I recommend no one should be using that implementation anyways...
The phrase "all subsequent reads of v" strongly implies that the barrier is anchored by the dependency chain of the address v (a monotonic dependency chain).
Hence this is explicitly defined as a release, since unrelated ops on addresses other than v... are still allowed to be reordered before the release.
(To me) the usage of the word "v" is the hint that the volatile keyword is an acquire/release barrier.
If not... then the documentation needs to provide more explicit wording.
But this is not just a Java issue... even within the Linux Kernel... the concept of barriers/ fences and synchronization gets mixed up... so I don't blame them.
Dude, more than 5 years later and you've helped me solve my problem. Thank you very much, be blessed!
The command used for broadcasting was wrong.
The correct command is:
am broadcast -n com.ishacker.android.cmdreceiver/.CmdReceiver --es Cmd "whoami"
The -n flag specifies the component name explicitly. Without it, the broadcast may not be delivered correctly to the receiver, and trying to get extras with intent.getStringExtra() will result in it returning null.
Thanks @Maveňツ for posting the suggestion in the comments.
It's been a few years since the question was asked, but since no good answer emerged, here's how I do it:
I use git's global config to store remote config blocks with fetch and push URLs, fetch and push refspecs, custom branch.<name>.remote routes, merge settings, etc.
The global config contains a config file per project, which gets included into $HOME/.gitconfig conditionally using [include] and [includeIf] blocks.
[includeIf "gitdir:ia2/website/.git"]
path=ia2/website.config
[includeIf "onbranch:cf/"]
path=cloudflare-tests.config
In this example, the file $HOME/.gitconfigs/ia2/website.config is automatically included when I work on files in the $HOME/proj/ia2/website directory, which is the website for the ia2 project.
Also, in any project, I can create a branch named "cf/..." which causes the cloudflare-tests.config file to be included in git's configuration, which routes that branch to a repo I have connected to Cloudflare Pages. This allows any of my project to be pushed to a Cloudflare Pages site by simply creating an appropriate "cf/" branch in that project.
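For illustration, a hypothetical $HOME/.gitconfigs/ia2/website.config could carry exactly those remote and branch blocks (the URL here is invented):

```ini
# Included automatically via an [includeIf "gitdir:..."] rule
[remote "origin"]
	url = https://example.com/ia2/website.git
	fetch = +refs/heads/*:refs/remotes/origin/*
[branch "main"]
	remote = origin
	merge = refs/heads/main
```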
The local config (ie, the .git/config file present in each clone) doesn't contain any repo configuration, other than things that accidentally end up there. Any settings I want to keep and duplicate on other machines are moved from the local .git/config to the global $HOME/.gitconfigs/$PROJECT.config file.
Since all configs for all my projects live under the same $HOME/.gitconfigs directory, this directory is itself a git repository, which I push to github, and fetch on all machines where I need it.
I have a repository named .gitconfigs at github, and I clone this in the $HOME directory of every machine I develop on.
Each one of the projects I'm working on has its corresponding $project.config file maintained in a branch with the same name as the project, and there are some config files that are included in all projects, like the cloudflare example I gave above.
The scheme is capable of maintaining a mix of private and public projects. Configs for public projects are pushed to my public .gitconfigs repo, and the private projects get pushed elsewhere. In a company setting, your dev team might maintain a private .gitconfigs repo for shared project configs.
You're welcome to inspect or fork my .gitconfigs repo at https://github.com/drok/.gitconfigs - give it a thumbs-up if this helps you, and I welcome pull requests. I currently have public configs for curl, git, transmission, gdb, and internet archive. One benefit of sending a PR is that I can give you feedback on whatever project you're adding. I've been using this technique for a year with huge time savings. No more losing project-specific repo settings for me.
Why are you using Breeze with Backpack?! Backpack has authorization out of the box. You should remove Breeze; it's not needed!
I faced this problem in wsl2.
Check the permission:
ls -l /var/run/docker.sock
Correct the permission:
sudo chgrp docker /var/run/docker.sock;
sudo chmod 660 /var/run/docker.sock;
Then reset Docker to factory defaults.
Then, In Powershell:
wsl --shutdown
After doing this, you should be able to run
docker ps
I just finally got this to work. I had tried all the documentation that you reference without success. This time around I used the PowerShell script included in this Snowflake quick start to setup the Oauth resource and client app.
https://quickstarts.snowflake.com/guide/power_apps_snowflake/index.html?index=..%2F..index#2
After using the PowerShell script to setup the enterprise apps I was still getting the bad gateway error. In my case it turns out that Power Automate was successfully connecting to Snowflake but was failing to run this connection test.
USE ROLE "MYROLE";
USE WAREHOUSE "COMPUTE_WH";
USE DATABASE "SNOWFLAKE_LEARNING_DB";
USE SCHEMA "PUBLIC";SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'PUBLIC'
-- PowerPlatform-Snowflake-Connector v2.2.0 - GET testconnection - GetInformationSchemaValidation
;
I had created a Snowflake trial account to test the OAuth connection, and in that account the COMPUTE_WH warehouse was suspended. As a result, the test connection query was failing. After discovering that Power Automate was successfully connecting to Snowflake, I just did the proper setup on the Snowflake side to get the query to run (a running warehouse, database, schema, and table, all usable by the specified user and role).
Here are some things to check:
If you have access to Entra ID check the sign-in logs under the service principal sign-ins tab. Verify your sign-in shows success.
In Snowflake check the sign-in logs for the user you created.
SELECT * FROM TABLE(information_schema.login_history()) WHERE user_name = '<Your User>' ORDER BY event_timestamp DESC;
Verify that the user you created has a default role, warehouse, and namespace specified.
If Power Automate was able to login check the query history for your user and see if/why the connection test query failed.
If Power Automate is successful in connecting to Snowflake but failing to run the connection test query you could try Preview version of Power Automate Add Connection window. I see it has a check box you can skip the connection test.
As of 2012, WS-SOAPAssertions is a W3C Recommendation. It provides a standardized WS-Policy assertion to indicate which version(s) of SOAP are supported.
For details on how to embed and reference a policy inside a WSDL document, refer to WS-PolicyAttachment.
Images and Icons for Visual Studio
Nuxt does not have a memory leak but Vue 3.5 is known to have one. It should be resolved when Vue 3.6 is released, or possibly you can pin to Vue 3.5.13 (see https://github.com/nuxt/nuxt/issues/32240).
For unit vectors, cosine similarity equals the dot product: cosine(A,B) = dot(A,B) since ||A|| = ||B|| = 1. The dot product is computationally cheaper, and Elasticsearch can optimize the calculation accordingly.
{
"mappings": {
"properties": {
"vector_field": {
"type": "dense_vector",
"dims": 384, // your vector dimensions
"similarity": "dot_product"
}
}
}
}
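A quick standard-library check of the identity this mapping relies on:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(a):
    """Scale a vector to unit length."""
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

v1, v2 = [3.0, 4.0], [1.0, 2.0]

cosine = dot(v1, v2) / (math.sqrt(dot(v1, v1)) * math.sqrt(dot(v2, v2)))
dot_of_units = dot(unit(v1), unit(v2))

# After normalizing to unit length, the dot product IS the cosine similarity
print(abs(cosine - dot_of_units) < 1e-12)  # True
```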
Your approach can cause high memory usage with large integers, as it creates a sparse array filled with undefined values. The filter step also adds unnecessary overhead. For large datasets, it's inefficient compared to JavaScript's built-in .sort() or algorithms like Counting Sort or Radix Sort for specialized cases. Stick with .sort() for practicality and performance.
Based on your setup, the inconsistent latency you're experiencing likely points toward a routing or proxy behavior difference between the external Application Load Balancer and the Classic version, rather than just a misconfiguration on your end. Though both load balancers operate in the Premium Tier and use Google's global backbone for low-latency anycast routing through GFEs, their internal architectures are not exactly the same. For instance, your external load balancer's Envoy layer, with its dynamic default load-balancing algorithm, may re-route through alternative GFEs during intercontinental hops (for example, your Asia-to-Europe test) when minor congestion occurs, which explains the 260 ms-1000 ms fluctuations. Meanwhile, the Classic load balancer sticks to a simpler, single optimized path, minimizing fluctuations, hence the consistent RTT from Seoul to europe-west2.
It might also be worth contacting Google Cloud Support with all your findings to identify whether this is related to a larger network problem or an internal routing issue.
Your POST became a GET because of an unhandled HTTP redirect.
Your GKE ingress redirected your insecure http:// request to the secure https:// URL. Following this redirect, your requests client automatically changed the method from POST to GET, which is standard, expected web behavior.
You may try to fix the API_URL in your Cloud Run environment variable to use https:// from the start. This prevents the redirect and ensures your POST arrives as intended.
To reliably trace this, inspect the response.history attribute in your Cloud Run client code. This will show the exact redirect that occurred.
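This behavior is easy to reproduce with the standard library alone (a self-contained sketch with a throwaway local server; the /old and /new paths are made up):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local server that 301-redirects /old to /new, mimicking the
# http -> https redirect described above
class Handler(BaseHTTPRequestHandler):
    def _respond(self):
        if self.path == "/old":
            self.send_response(301)
            self.send_header("Location", "/new")
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            body = ("method=" + self.command).encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
    do_GET = _respond
    do_POST = _respond
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# POST to /old: the client silently follows the redirect and reissues a GET
req = urllib.request.Request(f"http://127.0.0.1:{port}/old",
                             data=b"payload", method="POST")
with urllib.request.urlopen(req) as resp:
    body = resp.read()
print(body)  # b'method=GET'  <- the POST arrived at /new as a GET
server.shutdown()
```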
My polyfills got dropped when I upgraded angular and they needed to get re-added to angular.json (specifically, it was the angular localize line)
"polyfills": [
"zone.js",
"@angular/localize/init"
],
This is now possible with the .slnLaunch file.
Multi-project launch profiles are available in Visual Studio 2022 17.11 and later. To enable or disable the Multi-project Launch Profiles feature, go to Tools > Options > Preview Features and toggle the checkbox for Enable Multi Launch Profiles.
See: https://learn.microsoft.com/en-us/visualstudio/ide/how-to-set-multiple-startup-projects?view=vs-2022
First Script: The Beginning and the Meeting
SCENE 1: THE BROTHER'S DEATH
(A deserted alley. Night. Arjun's brother, AMIT, lies on the ground. SHERA approaches him.)
SHERA: Now talk, where is Rana? Still won't give up his location?
AMIT: (struggling to speak) I... will tell you nothing about him.
SHERA: (loudly) Small-time people like you don't mess with us! After today, no one will stand in our way!
(Shera raises his hand. There is rage in his eyes.)
SHERA: (to Bunty) Finish his game.
(The camera focuses on Amit's face. The screen goes black, and a gunshot is heard.)
SCENE 2: THE DECISION TO TAKE REVENGE
(Arjun's house. Morning. Arjun is on the phone. His face is numb. RAJ, SAMEER, and DEEPAK come to him.)
SAMEER: Bhai, what happened? Say something!
(Arjun turns sharply. His eyes are red.)
ARJUN: (angrily) Shera... he killed my brother. Does he think he'll get away? No! I won't leave him alive!
DEEPAK: Bhai, he is a very dangerous man.
ARJUN: (looking at Deepak) That's exactly why we'll destroy his power before we kill him. Raj, track down every one of his locations. Deepak, get word on all his businesses. Sameer, you stay with me. From today, we work for one thing only... revenge!
(The screen fades to black.)
SCENE 3: MEETING RANA
(An old warehouse. Night. ARJUN and SAMEER stand at the door. ROHIT comes out.)
ROHIT: Who are you people?
ARJUN: My name is Arjun. I need to meet Rana.
(Rohit lets them in. RANA sits in his chair.)
RANA: What are you doing here? I don't usually let people like you into my territory.
ARJUN: I need your help. We both have the same enemy: Shera.
RANA: (laughing softly) You want to fight him? You think you can beat him?
ARJUN: But I'm not alone. And neither are you. Together we can beat him.
RANA: So what do you want?
ARJUN: Revenge. You get your territory back, and I get revenge for my brother's death.
RANA: (slowly) If we join forces, there is one condition. The fight happens our way only.
ARJUN: (laughing) Agreed.
(They shake hands. A new, dangerous smile appears on both their faces.)
Second Script: The First Strike and the End
SCENE 4: THE BATTLE OF FATE
(A small factory. Night. Arjun and Sameer are hiding. Raj talks to them on the phone.)
RAJ (ON PHONE): Location confirmed, bhai. Two of Shera's big trucks are about to leave from here.
ARJUN: (quietly, to Sameer) Stay ready, we have to stop them.
(Rana and Rohit enter the factory from one side. Rana breaks down the door with a shotgun. An alarm starts ringing.)
RANA: That's exactly what we want. Now we wait for Shera to come.
(Goons come out from inside. Sameer fights them while Arjun covers him from a distance. Together they defeat the goons.)
ARJUN: (to Rana) This is our first mission. We can't let it slip away.
SCENE 5: BETRAYAL AND THE TRAP
(Shera's secret office. Daytime. Shera sits fuming.)
SHERA: How is this possible? Rana and that boy, how could they stop our trucks together?
RAVI: (fearfully) Boss, I've heard the two of them are together now.
SHERA: (loudly) Those two? I would have finished Rana alone long ago.
BUNTY: Boss, let's make a plan to catch them.
(Shera works out a plan in his head. His face goes completely calm.)
SHERA: Now we'll summon them to a place they won't leave alive.
SCENE 6: THE END OF REVENGE (CLIMAX)
(A big, old godown. Night. Arjun and Rana walk in.)
SHERA: (seeing them) So, you finally came. I thought you'd be too scared.
ARJUN: We're not the kind who get scared. Do whatever you have to. We're ready too.
(Suddenly the godown lights go out and a gunshot is heard.)
RANA: (shouting) This is his trap!
(A fight breaks out in the dark. In the end, Rana and Arjun catch Shera together.)
ARJUN: (approaching Shera) You thought that by killing my brother you had won. But you were wrong. The fire of revenge never dies down.
(Arjun smiles looking at Shera. There is victory in his eyes. The screen goes black.)
I encountered the same issue using amazoncorretto:21-alpine. In my case, the fix was simply forcing the version of io.grpc:grpc-netty-shaded from 1.70.0 to 1.71.0. No changes were needed to the Docker image itself.
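If you build with Maven, one way to force the version is a dependencyManagement entry like the sketch below (Maven is an assumption on my part; Gradle users would use a resolutionStrategy or a platform constraint instead):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin grpc-netty-shaded so the transitive 1.70.0 is overridden -->
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-netty-shaded</artifactId>
      <version>1.71.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```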
Check current setup
which -a python3
python3 --version
You'll probably find /usr/local/bin/python3 (3.9) ahead of /usr/bin/python3 (3.13).
Option 1: Use the system Python directly
/usr/bin/python3 --version
Option 2: Fix PATH so 3.13 is default
Edit ~/.zshrc (or ~/.bashrc) and add:
export PATH="/usr/bin:$PATH"
Then restart your shell.
Now python3 points to macOS's default (3.13).
Option 3: Use pyenv to manage multiple versions
If you need both 3.9 and 3.13:
brew install pyenv
pyenv install 3.9
pyenv install 3.13
pyenv global 3.13 # default everywhere
pyenv local 3.9 # per-project
✅ TL;DR: Don't remove or tamper with system Python.
To get back to 3.13 → repair your PATH.
To toggle between versions easily → use pyenv.
It's 2025 and Sublime Text 4 still has no option to stop persisting undo history after closing the app. I sometimes hit undo (Ctrl+Z) by mistake and don't know what the last saved state of the file was.
Luckily, I use GitHub and can discard that file's changes. Closing the file also helps, but it is just tiring to close files every time in a big project. Sublime Text 3 did not have this issue, as you mentioned above.
The Freename.com platform now offers traditional domains (.com etc.) too. So they offer both, and you can mirror a .com or any traditional gTLD or ccTLD on chain. The feature is called Name Your Wallet.
I am a bit late, but if you are using Vite for React, make sure your vite.config.js includes:
server: {
  host: true
}
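For reference, a complete vite.config.js with this setting might look like the sketch below (the @vitejs/plugin-react import is an assumption based on a typical React + Vite setup):

```javascript
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    // Listen on all addresses so the dev server is reachable from the network
    host: true,
  },
})
```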
You can fix this by adding an isset check in your TPL file. The error occurs because $cart.subtotals.tax is null while the template tries to access its properties. In your themes/your_theme/templates/checkout/_partials/cart-summary-totals.tpl file, find the line causing the error (around line 77) and wrap the tax-related code in an isset() check. This prevents the template from accessing properties of a null value. Clear your cache afterward. The issue typically happens when tax rules aren't properly configured in International > Taxes > Tax Rules.
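As a sketch of that guard (the exact markup around line 77 varies by theme; the label/value fields below are assumptions based on standard PrestaShop subtotal structures):

```smarty
{* Only render the tax row when the subtotal actually exists *}
{if isset($cart.subtotals.tax) && $cart.subtotals.tax}
  <span>{$cart.subtotals.tax.label}</span>
  <span>{$cart.subtotals.tax.value}</span>
{/if}
```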
from docx import Document
from docx.shared import Pt

doc = Document()

def add_section_title(text):
    # Bold 12 pt paragraph used as a section header
    p = doc.add_paragraph()
    run = p.add_run(text)
    run.bold = True
    run.font.size = Pt(12)
    # space_after lives on paragraph_format, not on the paragraph itself
    p.paragraph_format.space_after = Pt(6)

doc.add_heading('Questionário para Entrevista de Descrição de Cargos', level=1)

# Section 1
add_section_title('1. Informações Gerais')
doc.add_paragraph('• Nome do empregado: ______________________________________________________________')
doc.add_paragraph('• Cargo atual: ________________________________________________________________________')
doc.add_paragraph('• Departamento/Setor: _______________________________________________________________')
doc.add_paragraph('• Nome do gestor imediato: __________________________________________________________')
doc.add_paragraph('• Tempo no cargo: ____________________________________________________________________')

# Section 2
add_section_title('2. Objetivo do Cargo')
doc.add_paragraph('Como você descreveria, em poucas palavras, o principal objetivo do seu cargo?')
for _ in range(3):
    doc.add_paragraph('________________________________________________________________________________')

# Section 3
add_section_title('3. Principais Atividades')
doc.add_paragraph('Liste as principais atividades e tarefas que você realiza no dia a dia:')
for i in range(1, 6):
    doc.add_paragraph(f'{i}. ________________________________________')
doc.add_paragraph('Quais atividades são realizadas com mais frequência (diárias/semanalmente)?')
for _ in range(2):
    doc.add_paragraph('________________________________________________________________________________')
doc.add_paragraph('Quais atividades são esporádicas (mensais, trimestrais ou eventuais)?')
for _ in range(2):
    doc.add_paragraph('________________________________________________________________________________')

# Section 4
add_section_title('4. Responsabilidades e Autoridade')
doc.add_paragraph('• Quais decisões você pode tomar sem necessidade de aprovação do superior?')
for _ in range(3):
    doc.add_paragraph('________________________________________________________________________________')
doc.add_paragraph('• Você é responsável por supervisionar outras pessoas? ( ) Sim ( ) Não')
doc.add_paragraph('Se sim, quantas e quais cargos? ______________________________________________________')
doc.add_paragraph('• Há responsabilidade financeira? (ex: orçamento, compras, contratos)')
for _ in range(2):
    doc.add_paragraph('________________________________________________________________________________')

# Section 5
add_section_title('5. Relacionamentos de Trabalho')
doc.add_paragraph('• Com quais áreas/departamentos você interage com frequência?')
doc.add_paragraph('________________________________________________________________________________')
doc.add_paragraph('• Existe interação com terceiros, fornecedores ou usuários? Descreva:')
for _ in range(2):
    doc.add_paragraph('________________________________________________________________________________')

# Section 6
add_section_title('6. Requisitos do Cargo')
doc.add_paragraph('• Conhecimentos técnicos essenciais:')
for _ in range(4):
    doc.add_paragraph('________________________________________________________________________________')
doc.add_paragraph('• Ferramentas, sistemas ou softwares utilizados:')
for _ in range(3):
    doc.add_paragraph('________________________________________________________________________________')
doc.add_paragraph('• Escolaridade mínima necessária:')
doc.add_paragraph('________________________________________________________________________________')
doc.add_paragraph('• Certificações ou cursos obrigatórios:')
for _ in range(5):
    doc.add_paragraph('________________________________________________________________________________')

# Section 7
add_section_title('7. Competências Comportamentais')
doc.add_paragraph('Quais habilidades comportamentais são mais importantes para este cargo?')
for _ in range(5):
    doc.add_paragraph('________________________________________________________________________________')

# Section 8
add_section_title('8. Indicadores de Desempenho')
doc.add_paragraph('Como o desempenho neste cargo é avaliado? Quais indicadores são usados?')
for _ in range(4):
    doc.add_paragraph('________________________________________________________________________________')

# Section 9
add_section_title('9. Desafios do Cargo')
doc.add_paragraph('Quais são os maiores desafios ou dificuldades que você enfrenta neste cargo?')
for _ in range(4):
    doc.add_paragraph('________________________________________________________________________________')

# Section 10
add_section_title('10. Sugestões para Melhorar o Cargo')
doc.add_paragraph('Você tem sugestões para melhorar a descrição ou a execução do seu cargo?')
for _ in range(5):
    doc.add_paragraph('________________________________________________________________________________')

# Final remarks
add_section_title('✅ Observações Finais')
for _ in range(3):
    doc.add_paragraph('________________________________________________________________________________')

# Save the file
doc.save("Questionario_Descricao_de_Cargos.docx")
print("Arquivo salvo como 'Questionario_Descricao_de_Cargos.docx'")
Use:
npm i cloudinary@"^1.21.0"
or, depending on which major version of the cloudinary SDK your setup expects:
npm i cloudinary@"^2.7.0"
After that, try npm i multer-storage-cloudinary again.
Hope it also works for you.
Android devices support StrongBox and iOS supports Keychain with optional biometric authentication, making these options more secure. I believe this is updated information that could benefit others. Here’s the Stack Overflow link for reference.
The official documentation states that: "Many applications use one database and would never need to close it (it will be closed when the application is terminated). If you want to release resources, you can close the database."
So the problem was that my bind_user didn't have permission to read my directory. Using my root account, I managed to perform the authentication process.
1. Export the height map into Photoshop
2. In Photoshop, open the 2nd alpha channel
3. Image > Adjustments > Brightness/Contrast > increase brightness (enable the "Use Legacy" checkbox)
4. Export the height map as a copy
5. Import the height map and lower the terrain object
6. Now, since your "0" is lower, you can paint lower
1. Add debug flags when creating the RawKernel:
compute_systemG_kernel = cp.RawKernel(
    lines, "compute_systemG_kernel",
    options=("-G", "--generate-line-info"),
)
2. Launch with:
cuda-gdb --args python train.py
This sounds like an unusual use case; I'd need more detail to understand it fully.
Anyway, my suggestion is to use the right combination of resolution modifiers:
https://angular.dev/guide/di/hierarchical-dependency-injection#resolution-modifiers
IMO, if you are a smaller organization and strict on security, neither is a good idea, because you make your system vulnerable to probing attacks, i.e. a malicious actor trying to find out whether a user with a given email address already exists. While 409 is semantically correct for the state of the resource, exposing that information creates a vulnerability.
The secure way to handle this is to make your API's response ambiguous. The sign-up endpoint should always return the same generic, success-like response, regardless of whether the email already exists; a 200 or 202 will do.
I am aware this is rather bad from a UX perspective, but unless you have advanced probing detection like Google's, I advise against revealing whether an email exists.
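A minimal sketch of the idea, assuming a plain function that builds the response (the status code and message text are illustrative, not tied to any specific framework):

```python
def signup_response(email_already_exists: bool) -> tuple[int, str]:
    """Return the same ambiguous response whether or not the email exists,
    so the endpoint leaks nothing to a probing attacker."""
    # Do the real work out of band: create the account, or silently skip
    # creation (optionally emailing the existing owner instead).
    return 202, "If the address is valid, a confirmation email has been sent."

# The response is byte-for-byte identical in both cases, so (timing aside)
# an attacker cannot distinguish a fresh email from an existing one.
print(signup_response(True) == signup_response(False))
```

The same principle applies to login and password-reset endpoints, which are probed just as often as sign-up.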
I found the solution. The original project was installed with Django 4.3, but I have Django 5.0 at the moment, so the fix was to delete the admin folder from static and run the project again so it creates the new style files; that fixed the CSS issues.
Thanks for the help.
As far as I can tell, a general purpose solution to the original question cannot exist. C3roe brought up a good point in the comments: for any solution to the original question to exist, applyCardBorders() would need to run not only when the user opens the print dialogue, but also any time they changed the paper size, margins, scale, etc. within the print dialogue. No such hook exists.
Even using max-width: 6in; doesn't work when the screen is narrower than 6 inches; it only works when the screen is at least 6 inches wide. In general, the drawn borders will render correctly in the print preview if the card on screen is already at its maximum width and that maximum width is no wider than it would be on paper. However, using width: 6in; would be better if you want a specific size.
Printing layouts are tricky, but if you know you will be printing to a specific size, you could do the following:
<div id="print-area">
<div class="card">
<p>lorem ipsum</p>
<p>lorem ipsum</p>
</div>
</div>
const printWidth = '10.5in';
const printArea = document.getElementById('print-area');

window.addEventListener('beforeprint', () => {
  printArea.style.width = printWidth;
  applyCardBorders();
});

window.addEventListener('afterprint', () => {
  printArea.style.removeProperty('width');
  applyCardBorders();
});
You could even make a dropdown for paper sizes on the screen and give the dropdown a class of screen-only:
@media print {
  .screen-only {
    display: none !important;
  }
}
I think the cleanest way to do this is with Docker containers. You can run Linux docker containers with WSL2. Simply mount your Windows project directory in the Docker container and then run your node script. Everything will work as you expect it without all the spawnSync hocus pocus.
UPDATE:
By now with the code below, I do get a list of the letters in the alphabet, and when clicking on one of those, the place names starting with that letter do appear, and are correctly clickable. But I still get the error about the Column aliases when I click on 'show counts'...
import string

from django.contrib import admin


class PlaceFilter(admin.SimpleListFilter):
    title = 'first letter of place name'
    parameter_name = 'letter'

    def lookups(self, request, model_admin):
        qs = model_admin.get_queryset(request)
        letters = list(string.ascii_uppercase)
        options = [(letter, letter) for letter in letters]
        if val := self.value():
            if len(val) == 1:
                # Place names starting with the chosen letter, allowing the
                # Dutch "'s-" / "'s " prefixes before the letter itself
                sub_options = list(
                    qs.values_list('place__name', 'place__name').distinct()
                    .filter(place__name__iregex=rf"^('s-|'s )?{val}")
                    .order_by('place__name')
                )
                # Insert the place names right after their letter
                index = options.index((val, val)) + 1
                for option in sub_options:
                    options.insert(index, option)
                    index += 1
        return list(options)

    def queryset(self, request, queryset):
        if self.value() and len(self.value()) > 1:
            return queryset.filter(place__name=self.value())
        return queryset
The issue lies here: regexp_extract_all returns a list; use regexp_extract instead.
regexp_extract_all finds non-overlapping occurrences of regex in string and returns the corresponding values of group.
regexp_extract: if string contains the regexp pattern, it returns the capturing group specified by the optional parameter group.
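For intuition, the same list-versus-scalar distinction exists in Python's re module (the analogy is mine; the two functions above belong to your SQL engine):

```python
import re

s = "id=12, id=34"

# findall returns a list of every captured match: like regexp_extract_all
all_ids = re.findall(r"id=(\d+)", s)

# search(...).group(1) returns one capturing group: like regexp_extract
first_id = re.search(r"id=(\d+)", s).group(1)

print(all_ids)   # ['12', '34']
print(first_id)  # 12
```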
I ran into the same error on an Azure Virtual Machine configured as a self-hosted Linux build agent for Azure DevOps. In our case, the problem was caused by insufficient memory. After increasing the VM size from 2 GB to 8 GB of RAM, the error was resolved.
Make sure Standard Architecture is selected in Your Target -> Build Settings -> Architectures -> Standard Architectures
This will work:
this.audio = document.createElement('audio');
document.body.appendChild(this.audio);
this.audio.onplay = (ev) => this.audio.setSinkId(this.deviceId)
this.audio.src =... and .play()
Tested on Xcode 26
Shortcut: Cmd + Ctrl + T
OR
On top right click on "+" Add icon and select "Editor pane on right".
Pretty much the same result as the other answers, but maybe put in simpler words instead of citing the specification.
The short answer is: you get a char array with a zero inside.
The longer answer:
The C language has no real strings. Instead, C only has char arrays that are interpreted in a defined manner.
Initializing a char array via quotation marks is just syntactic sugar and is identical to defining an array of numbers (except that the last element is filled with a 0).
What does that mean?
The compiler only sees an array of values; it has no real idea whether the array represents numeric values or something string-like that will be passed to string-related functions.
Only we know whether a char array is a real string or an array of numeric values.
Thus it would be very dangerous if a compiler were allowed to do any implicit string optimizations.
That also ties into another all-too-common problem in C:
If a char array is missing its zero terminator, the (unsafe) string functions continue to read until a zero is found somewhere. Compilers may report warnings and hint at the safer string functions, but they are not able to fix this problem by themselves. Any attempt to let the compiler fix it would probably create many more problems.
Thanks, it works on my page https://lal-c.blogspot.com/p/darelm_3.html#
<style>
  .grid-container {
    column-count: 4;
    column-gap: 0;
    width: 100%;
    max-width: 1200px;
    margin: 0 auto;
  }
  .grid-block {
    break-inside: avoid;
    padding: 10px;
    box-sizing: border-box;
    width: 100%;
    display: inline-block;
  }
  .grid-block h3 {
    margin: 0 0 8px 0;
    font-size: 1.1em;
  }
  .grid-block ul {
    list-style: none;
    margin: 0;
    padding: 0;
  }
  .grid-block li {
    margin: 0;
    padding: 2px 0;
  }
  .grid-block a {
    text-decoration: none;
    color: inherit;
    display: block;
  }
</style>
Created with your code + the help of AI / Copilot
Both are correct; the keyword 'as' is recommended for the renaming and makes your query more readable.
Downgrading ojdbc11.version to 21.11.0.0 resolved the issue. It seems the latest version has a connection leak.
In SolrCloud you can't load a 120 MB file into ZooKeeper (even with -Djute.maxbuffer), and absolute paths fail because Solr treats them as ZK configset resources unless you explicitly allow external paths. The way to fix this is to mount the file on a filesystem accessible to all Solr pods (e.g., via a Kubernetes PersistentVolume, or by embedding it in the image) at a stable location such as /solr-extra/keepwords.txt, then start Solr with -Dsolr.allowPaths=/solr-extra -Dkeepwords.file.path=/solr-extra/keepwords.txt (in the Bitnami chart this can be passed through extraEnvVars or solrOpts). In your schema you can then reference the file either with ${keepwords.file.path} or directly as an absolute path (words="/solr-extra/keepwords.txt"), and Solr will load it from disk rather than from ZooKeeper. This avoids the path mangling you saw (/configs/coreName/...) and is the only reliable way to use a large keepwords list in SolrCloud; ZooKeeper and managed resources are unsuitable for files of that size.
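As a sketch, passing those flags through the Bitnami chart could look roughly like this (the extraEnvVars key and SOLR_OPTS variable are assumptions based on common Bitnami chart conventions; verify against your chart version):

```yaml
extraEnvVars:
  - name: SOLR_OPTS
    value: "-Dsolr.allowPaths=/solr-extra -Dkeepwords.file.path=/solr-extra/keepwords.txt"
```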
This is a common issue in Fabric/Power BI.
Debugging directly in Power BI can be tricky; the easiest way is to connect your semantic model to a separate tool, Tabular Editor, and fix the sorting there.
In my case, the folder name and the file name were the same, leading to this error.
A duplicate count metric happens when the same scheduler runs in parallel in multiple pods, and each pod reports the same work as a duplicate.
A pod is a small container group running in Kubernetes (or another orchestration system).
If your scheduler service is running in multiple pods, that means multiple copies doing the same work are running.
A scheduler is a service that runs tasks on a schedule (like cron jobs).
Each pod will independently try to run the same task.
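One common fix is to let the pods race for a short-lived lock before emitting the metric, so only one copy of the scheduler runs and reports each job. A minimal sketch, with an in-memory dict standing in for a shared store such as Redis (the SETNX-style semantics and all names here are illustrative):

```python
shared_store = {}  # stand-in for a shared store such as Redis

def try_acquire(lock_key: str, pod_name: str) -> bool:
    """SETNX-style: the first pod to claim the key wins; the rest skip."""
    if lock_key in shared_store:
        return False
    shared_store[lock_key] = pod_name
    return True

def run_scheduled_job(pod_name: str, run_id: str) -> bool:
    # Only the pod that wins the lock runs the task and emits the metric.
    if not try_acquire(f"job-lock:{run_id}", pod_name):
        return False
    # ... do the work and report the count exactly once ...
    return True

ran = [run_scheduled_job(pod, "2024-01-01T00:00") for pod in ("pod-a", "pod-b", "pod-c")]
print(ran)  # [True, False, False] -- only one pod reports
```

In production you would give the lock a TTL (so a crashed pod does not hold it forever) and use an atomic operation such as Redis SET NX EX rather than a plain dict.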
Service account keys make your Google account vulnerable; they need to be managed.
https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
You need a procedure in place to manage their lifecycle, with key rotation.
It turns out it was a bug in the tile cutter. The order in which it cropped the tiles was incorrect, and I didn't notice while looking at them "manually".
I encourage you to double-check the tiles if something like this happens to you.
The code works fine as is.
brew tap real420og/stdout-browser
brew install stdout-browser
ls -la | stdout-browser
Import the Leaflet CSS with
import 'leaflet/dist/leaflet.css';
You have three commands here:
checkout scm      # The basic Jenkins clone, which you often don't even need to call explicitly
dir()             # Make a working dir and run some commands inside it
checkout( ..      # Checkout using the git plugin https://plugins.jenkins.io/git/ with much more control over checkout behavior
In this case, you are making two checkouts, one inside a subdirectory and with more detailed options (potentially overriding the branch or upstream URL).
The error happens because Jupyter always runs an asyncio event loop, which conflicts with Playwright's Sync API. To fix it, the cleanest approach is to switch to Playwright's Async API (from playwright.async_api import async_playwright) and call your function with await in the notebook. If you want to keep the Sync API version, you can instead run it inside a separate thread (so it's outside Jupyter's loop). In VS Code, the "module not found" issue comes from using a different Python environment than your notebook: make sure both point to the same interpreter, and install Playwright with python -m pip install playwright followed by python -m playwright install.
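For the separate-thread route, the general shape is a small helper that runs the blocking call off the notebook's event loop (this helper is a generic sketch of mine, not a Playwright API; drop your sync Playwright function in as fn):

```python
import threading

def run_off_event_loop(fn, *args, **kwargs):
    """Run a blocking function in its own thread and return its result.
    Useful in Jupyter, where the main thread already owns an asyncio loop."""
    result, error = {}, {}

    def target():
        try:
            result["value"] = fn(*args, **kwargs)
        except Exception as exc:  # surface failures to the caller's thread
            error["value"] = exc

    t = threading.Thread(target=target)
    t.start()
    t.join()
    if "value" in error:
        raise error["value"]
    return result["value"]

print(run_off_event_loop(lambda x: x * 2, 21))  # 42
```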
Go to the file and change the permissions:
Locate the file in Finder
Select File -> Get Info
Scroll down to Sharing & Permissions
Change the permissions accordingly
According to ?withVisible (which documents visibility), it is used by source, and indeed I have not found a way to circumvent that.
I would therefore suggest wrapping source in an anonymous function, keeping only its value:
lapply(list_of_scripts,
       function(file) source(file)$value)
Use ReplaceItemAsync with an _etag precondition:
ItemRequestOptions options = new ItemRequestOptions
{
IfMatchEtag = doc._etag
};
await container.ReplaceItemAsync(
item: updatedDoc,
id: updatedDoc.id,
partitionKey: new PartitionKey(updatedDoc.pk),
requestOptions: options);
📚 Name of the Micro SaaS (provisional)
Luz das Palavras
Portal Literário
Jovem Escritor
Entrelinhas
InspiraBooks
---
🖥️ Basic structure of the app
1. Home screen
Highlights of books and young authors.
Buttons: "I want to publish" / "I want to read".
2. Sign-up / Login
The user chooses whether they are a reader, a writer, or both.
3. Writer's Area
Create a book (title, synopsis, genre).
Built-in text editor.
Simple cover-creation tool (drawing or upload).
Publish the book (free or paid).
4. Reader's Area
Library with categories.
Read online inside the app.
Like, comment, and follow authors.
5. Interaction & Community
Chat between readers and writers.
Space for writing challenges (e.g., an essay contest).
6. Gamification (to engage young users)
Badges for writing/reading more.
Ranking of active authors and readers.
🔹 Screen Flow of the Literary App
1. Welcome Screen
Logo + app name.
Tagline: "Write. Share. Inspire."
Buttons: Sign in | Create account
---
2. Sign-up / Login
Name, email, password.
Question: "You are…" → Reader / Writer / Both
Confirmation → goes to the Home screen.
---
3. Home (Main Screen)
Top menu: Books | Write | Community | Profile
Highlights: popular books, new authors, latest releases.
Button: Publish my book (for writers).
---
4. Writer's Area
Create a new book → form:
Title, genre, synopsis.
Text editor (for writing chapters).
Cover creator (draw or upload an image).
Button: Publish (can be free or paid).
---
5. Reader's Area
Library → with filters: Genre, Author, Most read, New.
Book screen:
Cover + title + author.
Buttons: Read now | Like | Comment.
In-app reading → Wattpad-style, swiping through chapters.
---
6. Community
Feed of posts (authors can share updates).
Literary challenges: essay contests, flash fiction.
Simple chat (writer ↔ reader).
---
7. User Profile
Photo, name, bio.
Statistics: books read, books published, badges.
Account settings.
---
8. Gamification (optional for the first version)
Badges: "Active reader", "First book published", etc.
Monthly ranking of writers and readers
Lovable
7:02 AM on Sep 25
Thought for 18 seconds
I'm going to create Entrelinhas, a vibrant literary platform for young writers! Inspired by Wattpad, but with a modern Brazilian identity.
Design I'm going to implement:
Colors: purple/violet gradients (creativity) with golden touches (inspiration)
Style: modern, youthful, clean yet vibrant
Typography: elegant, fitting for literature
Smooth animations for engagement
The first version will include:
Home with an inspiring hero section and highlights
Navigation between Read/Write/Community
Writer's area (create books, basic editor)
Book library with categories
Responsive, modern design
There is a script for that; it works nicely: