I was inattentive when looking through the workflow run logs, because there were messages hinting at exactly what needed to be adjusted:
The build scan was not published due to a configuration problem.
The Gradle Terms of Use have not been agreed to.
For more information, please see https://gradle.com/help/gradle-plugin-terms-of-use.
I looked through https://gradle.com/help/gradle-plugin-terms-of-use, and applied the following changes:
Added the com.gradle.develocity plugin to my settings.gradle:
plugins {
id 'com.gradle.develocity' version '4.0.2'
}
Added the develocity config to my build.gradle:
develocity {
buildScan {
termsOfUseUrl = "https://gradle.com/help/legal-terms-of-use"
termsOfUseAgree = "yes"
}
}
And now my workflow publishes Build Scans.
To test any plist, use the plutil command:
plutil -lint ~/Library/LaunchAgents/yourplistfile.plist
The output should be something like: yourplistfile.plist: OK
Can you try using one of these flags when starting the browser?
--start-maximized Starts the browser maximized, regardless of any previous settings.
or
--start-fullscreen Specifies if the browser should start in fullscreen mode, like if the user had pressed F11 right after startup.
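If you're launching the browser through Selenium (an assumption on my part; the question doesn't say), the flags can be passed via ChromeOptions. A minimal Python sketch:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")   # or "--start-fullscreen"
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")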
When you build HTML as a string in Angular without using sanitizer.bypassSecurityTrustHtml, Angular will sanitize the content for security. That means it removes elements it considers risky, like <input> buttons, to protect against attacks.
That's why:
Elements like <a> and <br> still show up; they're safe.
The <input type="button"> does not show; Angular blocks it.
When you use sanitizer.bypassSecurityTrustHtml, you're telling Angular: "I know this HTML is safe, don't filter it."
So Angular keeps everything, including the input.
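For illustration, here's a minimal sketch of the bypass; the component and the sample HTML string are mine, not from the question:
import { Component } from '@angular/core';
import { DomSanitizer, SafeHtml } from '@angular/platform-browser';

@Component({
  selector: 'app-html-demo',
  template: `<div [innerHTML]="trustedHtml"></div>`,
})
export class HtmlDemoComponent {
  trustedHtml: SafeHtml;

  constructor(sanitizer: DomSanitizer) {
    // Without bypassSecurityTrustHtml, the sanitizer would strip the <input>
    this.trustedHtml = sanitizer.bypassSecurityTrustHtml(
      '<a href="#">link</a><br><input type="button" value="Click me">'
    );
  }
}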
This post is a bit old, but I personally believe that data shouldn't be retrieved from the database via OnInitialized / OnInitializedAsync when server pre-rendering is active. Depending on the size of the result, you might end up waiting twice, which is inefficient. I've decided to use OnAfterRender / OnAfterRenderAsync instead.
public partial class RecordEntities
{
    private IList<IRecord> Records { get; set; } = new List<IRecord>();

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            // Load data in here
            Records = await LoadRecords();
            // Rerender UI
            StateHasChanged();
        }
    }
}
Don't forget to call StateHasChanged(), otherwise the components won't be aware of the loaded data. And make sure you call StateHasChanged() within the if statement, otherwise a render loop will be created.
If loading the data takes 10 seconds, it makes a difference to me whether I wait 10 or 20 seconds. And in general, I think we shouldn't forget that database resources are quite expensive, and if I can avoid pointlessly retrieving the data from the database twice, I'll do it.
Best regards,
Marcus
Try customizing Additional CSS
button.ast-menu-toggle {
width: 100%;
text-align: right;
}
I struggled with this for 2 days and finally figured out the issue.
If you run the `file` command on this file, you will probably get the following result: ASCII text.
In our case, the library was pushed to version control. But due to the binary being quite large, it was stored using git LFS. Pulling these files using `git lfs fetch` and `git lfs pull` resolved the issue.
According to the link in the error message (https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset), you should instead change the implementation of the __iter__ method so that it behaves differently based on which worker calls it, or change the worker_init_fn (see their two code examples).
Should I modify it after the fact in each worker so that each worker gets a dataset that's 4 times smaller?
Yes, from what I understand, this will make each worker fetch `1 / num_workers` of the dataset.
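A minimal sketch of the __iter__ approach, loosely based on the sharding example in the PyTorch docs (the range dataset itself is made up for illustration):
import math
from torch.utils.data import IterableDataset, DataLoader, get_worker_info

class RangeDataset(IterableDataset):
    def __init__(self, start, end):
        self.start, self.end = start, end

    def __iter__(self):
        worker = get_worker_info()
        if worker is None:
            # Single-process loading: yield the whole range
            lo, hi = self.start, self.end
        else:
            # Split the range so each worker yields a disjoint shard
            per_worker = math.ceil((self.end - self.start) / worker.num_workers)
            lo = self.start + worker.id * per_worker
            hi = min(lo + per_worker, self.end)
        return iter(range(lo, hi))

loader = DataLoader(RangeDataset(0, 100), num_workers=4)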
import datetime

now = datetime.datetime.now()
# Fixed UTC-6 offset; note this ignores daylight saving time
tz = datetime.timezone(datetime.timedelta(hours=-6), name="CST")
# Attach the timezone without converting the wall-clock time
now_tz = now.replace(tzinfo=tz)
# e.g. '2025-06-01#12:34:56.789-06:00'
now_tz.isoformat("#", "milliseconds")
When you hit the /actuator/heapdump endpoint:
A full GC (Garbage Collection) is often triggered before or during the heap dump process.
This is to ensure that the heap dump reflects the most accurate state of live objects.
The JVM tries to clean up as much as possible before writing the dump to reduce file size and improve clarity.
This GC can significantly reduce memory usage, especially if there was a lot of garbage (unreferenced objects) in memory.
The heap dump itself does not clear memory, but the GC that precedes it does.
Try adding become: true to your play.
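For reference, a minimal sketch of where it goes (the package task is only a placeholder):
- name: Example play with privilege escalation
  hosts: all
  become: true        # run tasks via sudo (or the configured become method)
  tasks:
    - name: Install a package that needs root
      ansible.builtin.package:
        name: htop
        state: present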
Thankfully it's working now!
Fix #1
You have to zip ONLY the files (not the folder containing them) and upload that to the Lambda function; otherwise it won't be able to find your file.
Fix #2
I had switched regions in between, which could potentially cause confusion, so I deleted the existing functions and API gateways and made new ones.
Fix #3
I changed the Access-Control-Allow-Origin header of the API gateway from "*" to my deployed URL.
Fix #4
I increased the Lambda timeout to match or closely align with API Gateway's 29-second limit.
I had the same error, and this tip helped me (I'm using version 0.21.2).
body {
  background-image:
    radial-gradient(closest-corner circle at -10% 15%, #D28CDE 0%, rgb(249, 249, 249, 1) 300%, transparent),
    radial-gradient(closest-corner circle at 100% 10%, #7A5AC7 0%, rgb(92, 50, 180, 0.01) 400%, transparent);
  background-color: #f9f9f9;
}
Thanks
When I get this error, it is because of an Out Of Memory (OOM) condition: the training is taking more GB of RAM/GPU than is available, so the operating system kills the process. Could this be happening to you?
The answer from Дмитрий Винник worked for me, but I needed to install selenium-manager first.
conda install conda-forge::selenium-manager
To access any files in your repository, the workflow first needs to check out that repository.
Add the following step above any steps that require access to files from your repository:
- name: Checkout repository
uses: actions/checkout@v4
source of the underlying action: actions/checkout
I'm having the same error. Have you found a solution?
Your system may not automatically install podman-machine with podman. I recommend checking whether it's installed, or trying to install it regardless.
For anyone finding this (like I did) while trying to solve this problem today, this is the best solution I could come up with:
Cascading Parameter Example | Tableau Public
The basic mechanics (take a look at the public workbook above to see it in action):
Separate sub_parameters for each main_parameter option
All sub_parameters are floating and stacked on top of each other on the dashboard
Visibility of sub_parameters is controlled with the "Control visibility using value" setting on the Layout tab. This points to a separate calculated boolean field for each sub_parameter so that only the appropriate one is showing at any given time
A final calculated field chooses the correct sub_parameter based on the main_parameter selection.
Same here.
I am also looking for a solution.
I've tried a lot but still haven't found one.
You should use Vuforia Engine 11.2+. Older versions do not support Unity 6 (see https://developer.vuforia.com/news).
It doesn't seem to be a widely recognized pattern, so it's probably a custom blend of MVVM and MVP. Think of it as using MVVM's ViewModel for state and data-binding, while also having a Presenter (as in MVP) to handle user interactions, navigation, or coordination logic. The View connects with the ViewModel for state, and the Presenter takes care of the flow and event handling. This kind of setup helps keep your ViewModels clean and easier to test. I'd suggest checking how the View, ViewModel, and Presenter are wired up in your codebase; it'll help clarify things. Also, maybe ask your teammates if there's an internal architecture diagram; they might already have one shared.
In the early days, the metadata (the map from key ranges to partition IDs) was stored in DynamoDB itself. The routers used to download the entire metadata, which caused spikes!
Later, AWS built MemDS to store the metadata.
Redis offers Redis Data Integration (RDI) for this. With RDI you can sync your Redis with the Postgres tables you want and transform the data to any Redis data type you want without coding.
I am from Ukraine and I use the interface as in the picture. Please help me solve this issue.
For me the following, usually helpful, import turned out to be the culprit:
import findspark
findspark.init()
First enroll the device > create a compliance policy for unmanaged devices > put it through Conditional Access.
Migration takes time! Let's not hurry.
For a broadcast join to be considered in Spark, the left table should be the bigger one and the right table the smaller one. It is explained very nicely in this thread; refer to the top-rated answer in the link below.
Broadcast join in spark not working for left outer
I also had the same issue. The solution was to disable customized SMTP, and then everything worked successfully.
Go to Supabase, Authentication > Emails > SMTP settings, then deactivate it and save the changes.
static IEdmModel GetEdmModel()
{
var builder = new ODataConventionModelBuilder();
builder.EnableLowerCamelCase();
return builder.GetEdmModel();
}
Set this up in Program.cs; it works.
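In case it helps, here's a minimal sketch of wiring that model into the OData pipeline in Program.cs; it assumes the Microsoft.AspNetCore.OData 8.x package and an "odata" route prefix, neither of which is stated above:
builder.Services.AddControllers().AddOData(options =>
    options.Select().Filter().OrderBy().Expand().Count()
           .AddRouteComponents("odata", GetEdmModel()));
With EnableLowerCamelCase(), metadata and payload property names are then emitted in camelCase.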
Can you quickly check the configuration of your 'TokenProvider' or 'JWTFilter' for token parsing or validation?
On standard Android, you can’t fully block the power button or shutdown via Android Device Management as it’s restricted at the system level.
That said, using kiosk mode via Android Enterprise (Device Owner) can limit user interaction. For advanced control, some MDMs like Samsung Knox, Scalefusion, or IBM MaaS360 (with OEM support) offer extended lockdown features.
If you call app.get('/') without the @ decorator, FastAPI registers nothing. That means no route exists, so every request returns 404. This is the most common mistake:
# ❌ WRONG:
app.get('/')
def root():
return {'msg': 'hello'}
# ✅ CORRECT:
@app.get('/')
def root():
return {'msg': 'hello'}
Often, your script will include routers or static mounts, but if the decorators aren't applied properly, nothing gets registered. Here's a robust minimal example that you can copy into main.py and test:
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
@app.get("/")
def hello():
return {"hello": "world"}
@app.get("/abc")
def abc():
return {"hello": "abc"}
Run it with:
uvicorn main:app --reload --host 0.0.0.0 --port 8000
Navigate to GET /, /abc, or /static/…; they should all work. If GET / still returns 404, re-check your decorators.
If you're including an APIRouter:
from fastapi import FastAPI, APIRouter
router = APIRouter(prefix="/items")
@router.get("/")
async def list_items():
return ["a", "b"]
app = FastAPI()
app.include_router(router)
Your route is reachable at /items/, not /. So GET / → 404, and GET /items/ → 200 with ["a","b"]. This is another source of "missing" routes.
root_path
If you're hosting behind a proxy (Nginx, Traefik, API Gateway, etc.) that strips or adds leading path segments, FastAPI's OpenAPI UI (/docs) or even the paths themselves can break. Use the root_path feature:
Via code:
app = FastAPI(root_path="/myapp")
Via Uvicorn CLI:
uvicorn main:app --root-path "/myapp"
This ensures both routing and docs work with the prefixed path.
Check you’re in the correct working directory (project root).
Temporarily hardcode a simple root route:
@app.get("/")
def debug_root():
return {"ok": True}
Print the registered routes to verify (getattr guards against mounts, which have no .methods):
for r in app.routes:
    print(r.path, getattr(r, "methods", None))
Then run your service and inspect output to know exactly what endpoints exist.
There is another solution: send mail with just a JavaScript SDK, without configuring SMTP etc. Install the SDK, enter the requested information, then call one function and the mail is sent. No spam, secure, CORS handled, etc. It works both server-side and in the browser.
WebRTC expects SDP to follow RFC 4566, which mandates that each line ends with CRLF (\r\n). Just add \r\n at the end of every line. For example:
"v=0\r\n" +
"o=- 0 0 IN IP4 127.0.0.1\r\n" +
"s=-\r\n" +
"t=0 0\r\n" +
"a=group:BUNDLE 0 1\r\n" + ..............
You don't really need to mess with array formulas; there is technically a simpler way. Let's imagine you have a category column in A and values in B, and you want the max per category (what GROUPBY would give you), but you don't have the latest version of Excel. Assuming row 1 is your headers:
In C2, type =VLOOKUP(A2,D:E,2,FALSE)
In D2, type =IF(E2="","",A2)
In E2, type =IF(COUNTIFS(A:A,A2,B:B,">"&B2)=0,B2,"")
Repeat your formulas down the sheet. What did they do?
Column C says you want to look up your current category in the contents of column D and return the value next to it in E.
Column D says you want to display your category, ready for your lookup, but ONLY where there's a value in column E next to it.
Column E says you want to look up how many records there are that share the category in column A, but have a higher value than the current one. If that total is 0, return the value, otherwise leave it blank.
Simon.
The reason for the problem you are describing (creating a trigger in a Doctrine migration) is most likely the delimiter.
Usually in SQL, when importing a larger SQL-file that contains triggers, the statements that generate the trigger look like this:
DELIMITER //
CREATE TRIGGER `MyTrigger` BEFORE DELETE ON `myTable` FOR EACH ROW BEGIN
DELETE FROM anotherTable WHERE pk = OLD.pk;
END//
DELIMITER ;
The "Delimiter //" and "// DELIMITER" commands are necessary to prevent SQL from interpreting the semicolon as the end-token to the CREATE TRIGGER command. It changes the end token temporaily to "//", such that the semicolon after OLD.pk ist interpreted as the end-token to the "DELETE FROM" statement.
This does not work when using the Method "addSql(...)" in your migration. The lines containing "DELIMITER //" and "//DELIMITER" must be ommited and everything works fine.
These commands are not necessary, because addSql always only accepts one single Sql-Statement at a time, such that it is clear that the semicolon belongs a statement in the Begin..End block of the trigger and not to the trigger itself.
The workaround with expode(...) does not work becaus addSql accepts only a string with only one Sql-Statement (and not an array of multiple SQL-statements) and because is does not strip the delimiter-commands before and after the trigger.
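To make that concrete, here's a minimal sketch of the migration method (table and trigger names taken from the snippet above; the surrounding migration class is assumed):
public function up(Schema $schema): void
{
    // One statement per addSql() call; no DELIMITER commands needed
    $this->addSql(
        'CREATE TRIGGER `MyTrigger` BEFORE DELETE ON `myTable` FOR EACH ROW
         BEGIN
             DELETE FROM anotherTable WHERE pk = OLD.pk;
         END'
    );
}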
import requests

def get_edge_latest_version():
    # Query Microsoft's public Edge update feed
    response = requests.get("https://edgeupdates.microsoft.com/api/products")
    data = response.json()
    for item in data:
        if item['Product'] == "Stable":
            for release in item['Releases']:
                if release['Platform'] == "Windows" and release['Architecture'] == "x64":
                    version = release['ProductVersion']
                    download_link = release['Artifacts'][0]['Location']
                    return version, download_link
lodash seems to work better.
I'm working on a project using [Electron](https://www.electronjs.org/) and structuredClone did not do the job.
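Presumably that means lodash's cloneDeep (my assumption; the post doesn't name the function):
const _ = require('lodash');

const original = { nested: { value: 42 } };
// Deep copy, without structuredClone's restrictions on certain object types
const copy = _.cloneDeep(original);
console.log(copy.nested !== original.nested); // true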
Are you running the server locally? If yes, then you need port-forwarding most probably.
Run `adb reverse tcp:3000 tcp:3000`, making sure you update both ports in the command to your server's port.
This can be used to "reconnect" the android device to the local machine, and I found it useful for example when my machine was sleeping for a longer amount of time and in several other scenarios.
For my own needs, I created a package that allows combining get and go_router; later I published it to help others: https://pub.dev/packages/getx_go
You should use one of the oracle.jakarta.jms.AQjmsFactory.getConnectionFactory methods. It returns an instance of jakarta.jms.ConnectionFactory.
Is there a setting you can place in a settings file to tell Visual Studio Code to look for these files in a Linux Docker container?
Dynamic attributes like .thumbnail from StdImageField may not be fully attached after .save() or .create(), causing pickling errors when caching. Use refresh_from_db() to reload the instance and ensure these attributes are correctly bound.
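A minimal sketch of the idea; the Profile model and cache key are hypothetical, for illustration only:
from django.core.cache import cache

profile = Profile.objects.create(avatar=image_file)  # hypothetical model with a StdImageField
profile.refresh_from_db()   # re-binds dynamic attributes such as .thumbnail
cache.set(f"profile:{profile.pk}", profile)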
I was facing the same issue with my Java application. The API worked just fine in Postman, but I got a "PATCH method not allowed" exception when calling it through the Spring Boot application. I used the code below to work around it. FYI, I also tried adding ?_HttpMethod=PATCH to a POST request, but had no luck.
String url = UriComponentsBuilder.newInstance()
.scheme(protocol)
.host("your-salesforce-instance-url")
.path(apiVersionPath + "/sobjects/Case/" + caseId)
.toUriString();
PostMethod postMethod = new PostMethod(url) {
@Override
public String getName() {
return "PATCH";
}
};
postMethod.setRequestHeader(HttpHeaders.AUTHORIZATION, "Bearer xxxxxx");
ObjectMapper mapper = new ObjectMapper();
String body = mapper.writeValueAsString("your-json-request");
postMethod.setRequestEntity(new StringRequestEntity(body, "application/json", "UTF-8"));
HttpClient httpClient = new HttpClient();
int statusCode = httpClient.executeMethod(postMethod);
Just in case it wasn't obvious (as it wasn't for me), we can pass GeomShadowText from {shadowtext} to the function geom_sf_text() in place of the existing geom = GeomText argument.
geom_sf_shadowtext <- function(
mapping = aes(),
data = NULL,
stat = "sf_coordinates",
position = "identity",
...,
parse = FALSE,
nudge_x = 0,
nudge_y = 0,
check_overlap = FALSE,
na.rm = FALSE,
show.legend = NA,
inherit.aes = TRUE,
fun.geometry = NULL
) {
if (!missing(nudge_x) || !missing(nudge_y)) {
if (!missing(position)) {
cli::cli_abort(c(
"Both {.arg position} and {.arg nudge_x}/{.arg nudge_y} are supplied.",
i = "Only use one approach to alter the position."
))
}
position <- position_nudge(nudge_x, nudge_y)
}
layer_sf(
data = data,
mapping = mapping,
stat = stat,
geom = GeomShadowText,
position = position,
show.legend = show.legend,
inherit.aes = inherit.aes,
params = list2(
parse = parse,
check_overlap = check_overlap,
na.rm = na.rm,
fun.geometry = fun.geometry,
...
)
)
}
I couldn't easily get the example in the original question to work due to API key issues, so here's a simpler working example:
library(ggplot2)
library(sf)
library(shadowtext)
library(rnaturalearth)
Africa <- ne_countries(continent = "Africa")
ggplot(data = Africa) +
geom_sf() +
geom_sf_shadowtext(mapping = aes(label = name_en))
I am writing this answer since I do not have enough reputation to write a comment yet.
I found this post while having the same problem, and tried to recreate my own problematic code since this was asked for in the comments; so this is just what I think could be the problem, rather than a solution.
In my case, the problem is the display type.
The element containing the text will only stay as wide as the text itself when using display: inline.
But since using this is not always an option, I think what the original poster needs is a way to limit the width to the text with non-inline display attribute values and without using width: min-content.
<div style="width: 65px;background: black;">
<span style="display: block;background: gray;">Short Text</span>
</div>
The module path may need to be updated to include javafx.media
--add-modules javafx.media
To solve the issue, follow these steps:
Create a table with a JSON-format column. For example, a table named Calculation with columns calculationNr, date, volume, and calculation.
Create a view using the following query to split the JSON column into separate fields:
Create View SplitView As
SELECT c.calculationNr, c.date, c.volume,
    JSON_VALUE(x.Value, '$.generalCal') as generalCal,
    JSON_VALUE(x.Value, '$.position') as position,
    JSON_VALUE(x.Value, '$.counter') as counter
FROM Calculation c
CROSS APPLY OPENJSON(c.calculation) as x
This query will create separate fields for generalCal, position, and counter based on the JSON values in the calculation column.
Connect to SQL Server and import the created view.
You will get the three separate fields you want from your simplified table.
This guide will help you do sums like the following:
SumOfValue | SumOfCounter1 | SumOfCounter2
---|---|---
150 | 1000 | 800
40 | 25 | 88
In Visual Studio 2022, I don't see a way to directly start/stop profiling, but I do see a way to add "marks" to achieve the same thing: https://learn.microsoft.com/en-us/visualstudio/profiling/add-timeline-graph-user-marks
You can set marks in your code using the Microsoft.DiagnosticsHub namespace, and then once the data is collected, you can select the time between two collected marks to limit the profiling results to that time period.
I track only the LLM process used by Ollama (e.g. Mistral) using psutil. This gave me accurate CPU and RAM usage of just the language model, not my whole system. The script:
Finds a running process with the name "ollama" or "mistral"
Measures only its CPU + memory usage
Displays that alongside inference time
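A minimal sketch of that approach (the process names and the sleep placeholder are illustrative):
import time
import psutil

def find_llm_process(names=("ollama", "mistral")):
    # Return the first running process whose name contains one of the targets
    for proc in psutil.process_iter(["name"]):
        if any(n in (proc.info["name"] or "").lower() for n in names):
            return proc
    return None

proc = find_llm_process()
if proc:
    proc.cpu_percent(None)        # prime the CPU counter
    start = time.time()
    time.sleep(1)                 # placeholder: run the inference here
    print("CPU %:", proc.cpu_percent(None))
    print("RAM MB:", proc.memory_info().rss / 1e6)
    print("Inference s:", time.time() - start)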
These steps work for me. tailwindcss v4.1.10 and angular 19.
https://tailwindcss.com/docs/installation/framework-guides/angular
I had regenerated the key store and triple-checked the SHA1 and everything. Interestingly, Google One Tap showed up and allowed me to click the profile, but it would error out afterwards. When I used a SHA1 that was obviously invalid, the component errored out immediately.
I found a blog suggesting using a 'Web' type OAuth client instead of the 'Android' one suggested by most blogs and Claude. I left the URL fields empty, and this worked!
TL;DR: try using a 'Web' client instead of an 'Android' one.
This gets the most recently created pod:
kubectl get pods --sort-by=.metadata.creationTimestamp -o jsonpath="{.items[-1].metadata.name}"
Solution for IntelliJ 2025.1: Uncheck "Detect executable paths automatically."
Tags: docker-compose vs docker compose
I faced the same issue; it does not work for me even after changing the file to logback-spring.xml.
Pasting the error for your reference:
10:13:10,149 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@61:31 - no applicable action for [springProfile], current ElementPath is [[configuration][springProfile]]
10:13:10,149 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@62:29 - no applicable action for [root], current ElementPath is [[configuration][springProfile][root]]
10:13:10,149 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@63:46 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@64:57 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@68:32 - no applicable action for [springProfile], current ElementPath is [[configuration][springProfile]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@69:29 - no applicable action for [root], current ElementPath is [[configuration][springProfile][root]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@70:46 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@71:57 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
Deployed in WSL + Docker.
I also had the same issue, but in my case the following did not resolve the error:
implementation 'com.google.android.gms:play-services-safetynet:+'
Then I checked my phone's DNS settings, which had a domain configured for ad blocking, and I turned that DNS setting off.
Google Photos API - deprecated.
Photo Picker API - only gives access to data created by that application, not all our pictures!
Takeout - the only way, I think... (not pretty...)
I've been struggling with this error like forever. It must be some weird IntelliJ bug. At the beginning, when I went to File > Project structure > Platform settings > SDKs, it picked up the Oracle OpenJDK 21 that I had installed on my computer, but the Classpath tab was empty; it just said "nothing to show", and I had the error "Kotlin: Cannot access 'java.io.Serializable' which is a supertype of 'kotlin.String'. Check your module classpath for missing or conflicting dependencies" showing all the time. What I did was, from File > Project structure > Platform settings > SDKs, remove the JDK, then add it again from the directory where I had it installed; the Classpath tab then showed some entries and the error went away.
In my case I replaced the equal-width constraint with a plain width constraint and set its constant by calculating it from the reference view; that seems easier.
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnBase`1.Microsoft.EntityFrameworkCore.Metadata.IColumnBase.get_ProviderValueComparer()
at Microsoft.EntityFrameworkCore.Migrations.Internal.MigrationsModelDiffer.Diff(IColumn source, IColumn target, DiffContext diffContext)+MoveNext()
at Microsoft.EntityFrameworkCore.Migrations.Internal.MigrationsModelDiffer.DiffCollection[T](IEnumerable`1 sources, IEnumerable`1 targets, DiffContext diffContext, Func`4 diff, Func`3 add, Func`3 remove, Func`4[] predicates)+MoveNext()
at System.Linq.Enumerable.ConcatIterator`1.MoveNext()
Is this issue already solved?
I got the same issue when trying to update from .NET 6 to .NET 8.
Did you ever resolve this? I'm running into exactly the same issue.
Build your Docker image and push it to a registry. Create Kubernetes Deployment and Service manifests to define how the container runs and is exposed. Use kubectl apply -f to deploy them. Access the app via NodePort or LoadBalancer. You can automate the whole process using Jenkins pipelines.
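For illustration, minimal manifests under assumed names (image myrepo/myapp:1.0 listening on port 8080; adjust to your app):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort        # or LoadBalancer on a cloud cluster
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
Apply both with kubectl apply -f on the manifest file(s).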
If you're implementing role-based access in a MERN stack development project and want to designate yourself as the sole admin using userContext, a common pattern is to assign a default admin user manually in your seeding script or during user registration, then manage access logic in your middleware using JWT or context-based checks.
Here's a practical overview of how companies structure admin/user roles within MERN stack web development applications; it might offer additional clarity on structuring access and managing roles effectively:
🔗 https://techstory.in/future-ready-benefits-of-hiring-a-mern-stack-development-company
Would also recommend double-checking how userContext is passed through your protected routes. If you're using React Context on the frontend, make sure the server correctly validates and distinguishes roles based on the token or session data.
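As a sketch of that server-side check, assuming Express and the jsonwebtoken package (the names here are illustrative, not from your project):
const jwt = require('jsonwebtoken');

// Middleware: only lets requests through when the JWT carries role === 'admin'
function requireAdmin(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    if (payload.role !== 'admin') {
      return res.status(403).json({ error: 'Admins only' });
    }
    req.user = payload;
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or missing token' });
  }
}

// Usage: app.delete('/api/users/:id', requireAdmin, handler);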
A Business Systems Analyst on a Data Warehouse application helps gather business needs, design data models, and ensure accurate data for reports. They bridge business and tech teams, using tools like SQL and BI software. They also test and optimize data systems to support better decisions.
The question is quite old but still relevant, and technology has changed. I am using WASM Web Tokens to secure my unauthenticated API. These are tokens generated in the browser using WebAssembly, with shared secrets for the backend API to decrypt and verify. WebAssembly, being bytecode, is far harder to read than JavaScript.
on:
workflow_run:
workflows:
- "CI + SonarQube Analysis"
types:
- completed
First approach: try giving the workflow a simple name; the names might be mismatched (e.g. CISonarQubeAnalysis). Second approach: add the completed type, as shown above.
This is what Collectors are great for: you can collect data while the whole project is analysed and then evaluate it in a single rule invoked at the end of the analysis.
Learn more: https://phpstan.org/developing-extensions/collectors
Some great community packages are implemented thanks to Collectors, like https://github.com/shipmonk-rnd/dead-code-detector.
You're not alone in facing this 502 issue with AWS CloudFront + Google Cloud Run. This is a known pain point due to the subtle but critical differences in how CloudFront expects an origin to behave versus how Google Cloud Run serves responses.
Quick Summary of 502 Causes (Specific to CloudFront + Cloud Run)
CloudFront returns a 502 Bad Gateway when:
It can't understand the response from the origin (Cloud Run in this case)
There’s a TLS handshake failure, unexpected headers, timeout, or missing response headers
CloudFront gets a non-compliant response format (e.g., too long/short headers, malformed HTTP version)
Even though Cloud Run may respond with 200 OK directly, it does not guarantee compatibility with CloudFront's proxy behavior.
Likely Causes in Your Case
Here are the most common and probable issues based on your setup:
Cloud Run's HTTP/2 or Chunked Encoding Response
Problem: CloudFront expects HTTP/1.1 and may misinterpret Cloud Run's chunked encoding or HTTP/2 behavior.
Fix: Force Cloud Run to downgrade to HTTP/1.1 by putting a reverse proxy (like Cloud Run → Cloud Load Balancer or Cloud Functions → CloudFront) in between, or use a Cloud Armor policy with a backend service.
Missing Required Headers in Response
Problem: CloudFront expects certain headers (e.g., Content-Length, Date, Content-Type) to be present.
Fix: Log all outbound headers from Cloud Run and ensure the response is fully RFC-compliant. Use a middleware to enforce this.
Random Cold Starts or Latency in Cloud Run
Problem: Cloud Run can scale to zero, and cold starts cause delay. CloudFront times out quickly (~10 seconds default).
Fixes:
• Set min instances in Cloud Run to keep one container warm
• Optimize cold start time
• Increase CloudFront origin timeout (if using custom origin)
TLS Issues Between CloudFront and Cloud Run
Problem: CloudFront uses SNI-based TLS. If Cloud Run isn’t handling it as expected or certificate isn’t valid for SNI, 502 can result.
Fix:
• Use fully managed custom domains in Cloud Run with valid certs
• Check that your custom domain doesn’t redirect to HTTPS with bad certificate chain when coming from CloudFront.
Cloud Run Returns 404 or 500 Internally
Problem: If Cloud Run returns a 404/500, CloudFront may wrap this in a 502
Fix: Log actual responses from Cloud Run for all paths
Best Practice: Use a Layer Between CloudFront and Cloud Run
Instead of connecting CloudFront directly to Cloud Run, use:
• Google Cloud Load Balancer (GCLB) with Cloud Run as the backend
• Then point CloudFront to the GCLB IP or domain
This avoids a ton of these subtle issues and gives you more control (headers, TLS, routing).
Diagnostic Checklist
Item Status:
• Cloud Run always returns required headers (Content-Length, Content-Type, Date)
• Cloud Run has min instance (avoid cold starts)
• CloudFront origin protocol set to HTTPS only
• CloudFront timeout increased (origin read timeout = 30s or more)
• Cloud Run domain SSL cert supports SNI
• Logs from Cloud Run show successful 200s
• CloudFront logs show exact reason (check logs or enable logging to S3)
Community Reports
Many developers report intermittent 502s when using CloudFront + Cloud Run without a reverse proxy.
Some fixes:
• Moving to Google Cloud CDN instead of CloudFront
• Adding NGINX or Cloud Load Balancer in between
• Avoiding chunked responses and explicitly setting Content-Length
Suggested Immediate Actions
• Enable CloudFront logging to S3 to get more detail on the 502s
• Add a reverse proxy (NGINX or GCLB) between Cloud Run and CloudFront
• Force HTTP/1.1 response format from Cloud Run
• Set min_instances=1 to eliminate cold starts
• If nothing helps, consider using Google Cloud CDN for tighter integration with Cloud Run
If you want help debugging further, please provide:
• A sample curl -v to the Cloud Run endpoint
• The CloudFront response headers when the 502 happens
• Cloud Run logs during the time of the error
Let me know and I can walk you through fixing this definitively.
Check your NLog.config to ensure the layout includes ${exception}:
<target xsi:type="File" name="logfile" fileName="log.txt"
        layout="${date} ${level} ${message} ${exception:format=ToString}" />
You can achieve this by turning off the interactive option in the "Rating Bar Properties" section; you can then use decimal values such as 3.1, 3.5, etc. to control the star filling.
Without turning it off, it won't work.
If you're building a dashboard in .NET 6 WinForms and looking for a modern, high-performance charting solution, you can try the Syncfusion WinForms Chart library.
It offers a wide variety of 45+ chart types including line, bar, pie, area, and financial charts.
Optimized for performance, it handles large datasets smoothly without lag.
Fully customizable with rich styling and interaction options like zooming, panning, tooltips, and annotations.
For more detailed information, refer to the following resources:
Demo: https://github.com/syncfusion/winforms-demos/tree/master/chart
Documentation: https://help.syncfusion.com/windowsforms/chart/getting-started
Syncfusion offers a free community license to individual developers and small businesses.
Note: I work for Syncfusion.
It might be related to a plugin or dependency's Kotlin Gradle Plugin version.
It is about initializers.
Without inline, the initializer will be generated into the module unit interface itself, and consumers won't contain the initializer.
With inline, the initializers will be generated into each TU that uses it, and the initializers have weak linkage.
Generally the former is preferred. We want the latter only if we want to do some hacks to replace the initializers.
The constructor is not directly responsible for creating the object; it is responsible for initializing the object after it is created by the new keyword. The new keyword creates the object and allocates memory for it on the heap, and its address is stored in stack memory.
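A small Java illustration (the class is mine, just to show the split between allocation and initialization):
class Point {
    int x, y;
    Point(int x, int y) {   // initializes the already-allocated object
        this.x = x;
        this.y = y;
    }
}

class Demo {
    public static void main(String[] args) {
        // 'new' allocates on the heap; the constructor then initializes;
        // the reference 'p' lives in stack memory.
        Point p = new Point(1, 2);
        System.out.println(p.x + "," + p.y);
    }
}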
In www.andre-gaschler.com/rotationconverter, under "Euler angles (degrees)", the order should be "ZYX" to match scipy's Rotation.as_euler("xyz", degrees=True).
Just add a comma at the end and format the code normally.
ListView.builder(
itemBuilder: (BuildContext context, int index) {
return Text("Hello World XD");
}, // <- add this
),
Why am I getting this error?
A simple \n worked for me:
window.alert("Hello \n breaks the line");
Please check this once again.
I had a similar solution, but decided on creating a dictionary for the result. This works as well:
name_list = {}
input_str = input().strip().split()
name = input_str[0]
while name != '-1':
try:
age = int(input_str[1]) + 1
name_list[name] = age
except ValueError:
age = 0
name_list[name] = age
finally:
input_str = input().strip().split()
name = input_str[0]
for k, v in name_list.items():
print(f'{k} {v}')
If, after trying all the above suggestions, the terminal still doesn't work, you should update your Windows 10 to the latest version; for example, your Windows 10 might be at version 1709, so update it to the latest, 22H2.
What I did: I installed a fresh Windows 10 and then Android Studio Meerkat Feature Drop, and was shocked to see that my terminal was not working. I checked JAVA_HOME in the environment settings, checked the terminal settings in Android Studio (set the shell to cmd.exe), enabled the legacy console option in cmd, and ran Android Studio as administrator. I was fed up with all the solutions, which were all correct but weren't working for me. After some time I was trying to install the latest version of WhatsApp for Windows but kept getting an error, and then I thought this might be the same reason my Android Studio wasn't working well. I used http://microsoft.com/en-us/software-download/windows10 to update my PC. It took some time, and when I reopened Android Studio the terminal was working well, and now I am able to install WhatsApp too. Enjoy.
glDrawArrays's third parameter is the number of vertices to draw, not the index of the last one. Thus the correct call here is:
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
Try adding .default at the end, as shown below:
const fetch = require('node-fetch').default;
When using require('node-fetch'), you must access the default property.
Please consider using our CheerpJ Applet Runner extension for Edge (free for non-commercial use). It is based on CheerpJ, a technology that allows running unmodified Java applets and applications in the browser in HTML5/JavaScript/WebAssembly.
Full disclosure, I am CTO of Leaning Technologies and lead developer of CheerpJ
You can achieve this with the help of the modifications below.
Change your TextInput control's mode to Multiline Text.
Add a Slider control to the app.
Add the code below to the OnChange property of the slider control:
If(EndsWith(TextInput.Text,Char(10)),Select(SubmitBtn))
It appears that Plotly v6.0.0 conflicts with Jupyter NB. I downgraded as clearly suggested in a post that I found after asking the question here: Plotly displaying numerical values as counts instead of its actual values?
This is possible by right clicking on the console tab, then going to the option `New console in environment` and selecting your environment there.
I know this probably isn't a trending topic anymore, but I found the solution: https://www.kaggle.com/code/ksmooi/langchain-pandas-dataframe-agent-guide
You're facing an issue where Terraform hangs when trying to destroy the default VPC. This is a known behavior because AWS does not allow Terraform to delete the default VPC using the aws_default_vpc resource. This resource only manages default tags and settings—it doesn't delete the default VPC.
Why terraform destroy hangs on aws_default_vpc
The aws_default_vpc resource does not support deletion.
Even with force_destroy = true, this attribute is not valid for aws_default_vpc.
Terraform keeps trying because it assumes it can delete it, but AWS silently prevents it.
Recommended Solutions
1. Use the AWS CLI or Console to Delete It
You must manually delete the default VPC (if allowed) via AWS Console or AWS CLI:
aws ec2 delete-vpc --vpc-id vpc-0e1087cdb9242fc99
But note: AWS sometimes recreates default VPCs automatically, or doesn’t allow deletion in some regions.
2. Update Terraform Code to Stop Managing the Default VPC
Remove the block from your Terraform code entirely:
# Delete or comment out this block
# resource "aws_default_vpc" "default_vpc" {
# tags = {
# Name = "default-vpc"
# }
## }
Then run:
terraform apply
To detach from managing the default VPC
Alternative: Use Data Source Instead
If you need to reference the default VPC but not manage it, use:
data "aws_vpc" "default" {
default = true
}
Clean Way Forward
If your goal is a custom VPC setup, it’s best to:
Ignore the default VPC.
Use aws_vpc to create your own from scratch.
Use terraform state rm to remove aws_default_vpc.default_vpc from state if it’s stuck:
terraform state rm aws_default_vpc.default_vpc
Task | Supported in Terraform? | Workaround
---|---|---
Delete default VPC | No | Use AWS CLI/Console
Manage default VPC tags | Yes | Use aws_default_vpc
Prevent Terraform hanging | No (must remove) | Remove block + terraform state rm
Reference default VPC safely | Yes | Use data "aws_vpc"
For EKS version 1.33 I faced the same issue and resolved it by adding a username under mapUsers in:
kubectl edit configmap -n kube-system aws-auth
This change made it work.
Yes, you're absolutely on the right track. Using EditText with a TextWatcher to detect the @ symbol, showing a popup with user suggestions, and then inserting a styled span using setSpan() is the standard and correct way to implement @mention functionality in Android. Android doesn't have a built-in component that does this out of the box, so a custom implementation like yours is common and totally valid.
That said, there are some helpful open-source libraries like SocialView that can make parts of this process easier, especially when it comes to detecting mentions and managing spans. But if you need fine control (like custom styling and storing user IDs), your current custom approach is usually the best way to go.
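For what it's worth, a rough Kotlin sketch of the two halves (widget names and trigger logic are illustrative, not from your code):
import android.graphics.Color
import android.text.Editable
import android.text.Spanned
import android.text.TextWatcher
import android.text.style.ForegroundColorSpan
import android.widget.EditText

fun attachMentionWatcher(editText: EditText, onTrigger: (position: Int) -> Unit) {
    editText.addTextChangedListener(object : TextWatcher {
        override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {}
        override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {
            // When a single '@' is typed, ask the caller to show the suggestions popup
            if (count == 1 && s != null && s[start] == '@') onTrigger(start)
        }
        override fun afterTextChanged(s: Editable?) {}
    })
}

fun insertMention(editable: Editable, atPosition: Int, userName: String) {
    val mention = "@$userName"
    editable.replace(atPosition, atPosition + 1, "$mention ")   // replace the bare '@'
    editable.setSpan(
        ForegroundColorSpan(Color.BLUE),   // style the mention; store the user ID separately
        atPosition, atPosition + mention.length,
        Spanned.SPAN_EXCLUSIVE_EXCLUSIVE
    )
}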
Hello Aburaddaha Abdalmalek,
Were you able to find out why all the metrics are zero? I am having the same issue with COCO.
If we do it this way, the app becomes too heavy when we build the APK/AAB; can't we make it lighter?
You're comparing strings to strings (i.e. "School Type" = 'District Community Day Schools' is always False). You should use [column name here] instead, or avoid spaces in column names (e.g. school_type), after which you'll get an error such as sqlite3.OperationalError: no such column: <column name goes here>.
According to the specification, a ZIP file is a concatenation of the file data, some meta information per file, and an index containing metadata and the locations of the files inside the archive. It is possible to create ZIP archives using only T-SQL. Here is a POC.
Part of the metadata is a CRC32 code of each file – it can be calculated once per file and used in different zip archives.
You need to set up a TURN server in combination with a WS server (for signaling), or use a third-party service like Agora. TURN and WS are very easy to set up, even on a VPS. I have been working in this field for 5 years now and can walk you through the entire process. It will require a lot of trial and error (on both fronts, server and code), but it's doable. Recordings can be saved on-device as well as on the server.
I also faced a similar issue today where I was not able to log in to a Linux server and got the same error:
"kex_exchange_identification: read: Connection reset by peer"
This was happening because my /etc/group file was somehow blank. Restoring the file from backup and then restarting the sshd service sorted the issue.
I had the same issue in a .NET 8 worker service. I removed the manual DLL <Reference> and added a <PackageReference> for "System.ServiceProcess.ServiceController", which resolved it.