Flutter is an SDK developed by Google. It's a Dart library built to provide GUIs with a native look & feel. This is achieved via the Flutter Engine, which, last I saw, was built in C++. However, calls into the native platform are done via each platform's own SDKs (and native languages, such as Swift and Kotlin/Java).
Dart is both a programming language and a platform (Dart VM). And it can be run in many ways:
Dart Virtual Machine: On Windows, macOS and Linux, using Just-in-Time.
Native: Using dart2native, Dart can be compiled to self-contained, single file, native executables.
JavaScript: Using a source-to-source compiler, Dart code converts to JavaScript and can be run in most web browsers.
AOT / Ahead-of-Time: This is fully native to mobile platforms (iOS / Android) and used mostly for delivery to app stores.
I am having the same problem. Here is some sample code. As can be seen from the results, the code outside of the test is called twice: once before the actual test is run and once after. Only on the second pass is the code inside the test scope executed. I assume that this is due to Playwright scanning for tests before executing the actual tests.
Like Drashty, I am creating parameterized tests so the parameter data has to come from outside of the test block. The problem with this double pass is that the parameter retrieval routines are being called twice.
I have not been able to tap into the internal context to prevent a second call to the parameter-retrieval routine. As can be seen below, the context does not seem to be preserved across the two passes (the iterator value is the same).
Question: how can I tap into the Playwright context so that I can set a boolean flag that will prevent pre-test code from executing twice?
import { test } from '@playwright/test';
let iterator: number = 0;
console.log(`Pre Test step: ${iterator}`);
iterator = iterator + 1;
test(`testing with id: a test`, () => {
console.log(`dummy step`);
});
Results:
The answer was so easy I couldn't see it right before my eyes. I needed to add:
width: '100%',
to the TouchableHighlight's style.
Check out the new Snack
<https://snack.expo.dev/@rjapenga/touchablehighlight-fixed>
I was under the mistaken impression that "flex: 1" meant that the item would grow to the size of the available container both vertically and horizontally and thus I never tried width. For the life of me, I cannot find where the React Native documentation tells me that:
"Flexbox works the same way in React Native as it does in CSS on the web, with a few exceptions."
<https://reactnative.dev/docs/flexbox>
Their example seems to imply that I don't need to say width: '100%', and I cannot find where they explain this particular exception.
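For reference, a minimal sketch of the corrected style object (names are illustrative; flex: 1 only grows the item along the parent's main axis):

```javascript
// Illustrative TouchableHighlight style: flex alone was not enough,
// width: '100%' is what stretches the item horizontally.
const styles = {
  touchable: {
    flex: 1,
    width: '100%',
  },
};

console.log(styles.touchable.width); // prints 100%
```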
So, your intent is to have Form A open, click a button and Form A closes and Form B appears.
The problem you have is that Form A happens to be the main form of your application (the one that is set to run on startup). Whenever the main form is closed, the application will try to shut down and will close all other forms.
If you open Form A from your Main form and then try to swap open forms it should work.
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
Yos Wangsaf
It may not be the quickest solution, but it could help to assign each item an integer ID as a primary key and have the QR codes populate that integer (if this is something you can change). I have an app set up in a similar manner and have not had this issue; I wonder if avoiding a string as the variable will help.
I had this issue too, but my project has a modular structure with native federation. Any help is much appreciated.
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
platformBrowserDynamic().bootstrapModule(AppModule);
@NgModule({
declarations: [
AppComponent,
// DialogComponent,
IEAlertComponent,
InformativeBannerComponent,
// ErrorModalComponent,
NotificationBannerComponent,
// EmailPreviewDialogComponent,
// TestEmailDialogComponent,
FooterComponent,
// MultiSearchComponent,
],
bootstrap: [AppComponent],
imports: [
FormsModule,
ReactiveFormsModule,
BrowserModule,
CommonsComponentsModule,
// ChatbotModule,
// FiltersModule,
AppRoutingModule,
BrowserAnimationsModule,
ContactDrawerModule,
HelpDrawerModule,
JsonSchemaFormModule,
MaterialDesignFrameworkModule,
DdcMastheadModule,
DdcSidenavModule,
DdcBannerModule,
DdcLoadingModule,
DdcConfirmationModalModule,
DdcAlertMessageModule
],
providers: [
provideHttpClient(withInterceptorsFromDi()),
{
provide: HTTP_INTERCEPTORS,
useClass: HttpErrorInterceptor,
multi: true,
},
{
provide: APP_INITIALIZER,
useFactory: autherize,
deps: [UserService],
multi: true,
},
ScrollService,
RouteStatusService,
]
})
export class AppModule { }
Click the little arrow next to the Copilot icon in the taskbar at the top; there you will find "Configure Code Completions". Clicking on that opens a menu; click "Disable Completions" and you are all set.
A stupid method that works for me on DataGrip 2024 on macOS:
Since I saved the query into consoles, this is the folder where all stored consoles live:
/Users/YOURUSERNAME/Library/Application Support/JetBrains/DataGrip2024.2/consoles
Just go through those UUID folders to find the correct console.sql, then use the folder name (UUID) to locate the password you are looking for in the Keychain, like the example below:
IntelliJ Platform DB — 892a71d8-e6ab-4a3e-9f3c-4ccc0d1bd2e2
import 'package:flutter/material.dart';
void main() {
  runApp(HanaKoreanBhashaApp());
}

class HanaKoreanBhashaApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Hana Korean Bhasha',
      theme: ThemeData(
        primarySwatch: Colors.orange,
      ),
      home: HomePage(),
    );
  }
}
class HomePage extends StatelessWidget { final String appDescription = 'Korean UBT App is a complete and user-friendly mobile application designed especially for EPS-TOPIK (Ubiquitous-Based Test) preparation. It offers a realistic and updated test format that includes a total of 40 questions—20 reading and 20 listening—exactly like the official UBT exam. The reading section helps you improve your grammar, vocabulary, and comprehension through structured practice sets, while the listening section includes high-quality native Korean audio with workplace and daily life conversations to test your understanding. This app is ideal for Nepali users who want to go to South Korea for employment under the EPS system. It provides full mock tests, daily vocabulary, model question sets, and progress tracking, making it the smartest and easiest way to prepare from anywhere. Whether you’re a beginner learning the basics or an advanced student revising for the exam, this app supports your journey with both offline and online features. Key features include full 40-question mock tests, instant result scoring, grammar and sentence explanation in Nepali and English, pronunciation tips, and regular vocabulary updates. The app also offers language options in Nepali, Korean, and English, so that users can learn in the language they’re most comfortable with. With a simple interface and easy navigation, Korean UBT App turns your mobile phone into a smart Korean learning center. Start practicing anytime, anywhere—even without internet—and track your improvement over time. With Korean UBT App, your EPS-TOPIK preparation becomes faster, smarter, and more effective. Download now and take the first step toward your dream of working in Korea.';
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Hana Korean Bhasha')),
      body: SingleChildScrollView(
        padding: EdgeInsets.all(16),
        child: Text(appDescription),
      ),
    );
  }
}
Publish now fails out of the box. I uninstalled and reinstalled Visual Studio 17.4.6, then created a .NET 8 console app and a publish-to-folder profile. I simply clicked through without changing anything. The project builds fine, but Publish fails with "Publish has encountered an error. We were unable to determine the cause of the error. Check the output log for more details." Publishing "Hello, World!" should just work.
I can't comment on the post yet; however, if anyone is still having this issue with SSMS 21, just make sure that you select the "Business Intelligence" workload with the optional components.
I have the same issue. Really annoying. The bug seems to be tied to the .html extension in my case, maybe to other extensions too.
I open document.html in Notepad++ and it opens as ISO-8859-1, even if it was saved in UTF-8.
I save document.html (forced to UTF-8, looking good) as document.txt.
I close document.txt.
I re-open document.txt and it is OK: Notepad++ recognizes UTF-8.
But... if I re-save document.txt as document.html (the original name, or any other name with the .html extension), when I re-open document.html Notepad++ opens it as ISO-8859-1 again. Bad.
I tried simply renaming (not saving as) the document.txt file to document.html, and the same issue occurs when opening it in Notepad++. Renaming in Windows Explorer doesn't fix it.
So I guess there must be some hidden special characters that trip up Notepad++, because this only happens to me with specific HTML files, not all of them, and I found no way to detect those bad characters in Notepad++. There must be an option for the grep command on Linux, as in link. It could be a Windows system problem too, since Notepad++ uses its libraries.
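To hunt for those hidden non-ASCII bytes on Linux, GNU grep's Perl mode works; a quick illustration (file name and contents are made up):

```shell
# Create a two-line file whose second line contains a non-ASCII byte,
# then list the lines with bytes outside the ASCII range.
printf 'hello\ncaf\xc3\xa9\n' > sample.txt
grep -nP '[^\x00-\x7F]' sample.txt   # prints: 2:café
```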
A similar question: both the rows and columns have headers, and the first four columns have names. I want to remove, from a list of a thousand data frames, those where all columns except the first four have no values (i.e. ".." in the column).
The code you provided looks correct.
Two things to check.
1. Make sure you did the allowlisting in steps 3 and 4
https://docs.chain.link/ccip/tutorials/evm/programmable-token-transfers#deploy-your-contracts
2. Are you sending tokens with your data? If not, that can cause a revert as well.
I was new to load balancers. I managed to make my Elastic Beanstalk environment work and host my backend by turning `Public IP` to `Enabled` in my environment's configuration. This QnA and this article were helpful after going through a bunch of webpages; that is how the "Instance has not sent any data since launch" issue was solved.
However, the Severe status was still there, but with different errors (I tried changing the Process's port to `80` and its protocol to `HTTP`):
100.0 % of the requests are erroring with HTTP 4xx. Insufficient request rate (12.0 requests/min) to determine application health.
Process default has been unhealthy for 12 minutes (Target.ResponseCodeMismatch).
These issues are pretty much expected, since I am hosting a backend that doesn't have any method or page at its root endpoint, i.e. `/`. So when the health check sends a `GET` request to `my-env.us-east-1.elasticbeanstalk.com/`, it returns a 400 status. To solve this, you need a `GET` method at your root endpoint that is free from any authorization wall. Since I don't need one and am hosting only a backend, I changed the `HTTP code` in the Process settings from `200` to `400-499` to skip this and resolve `Target.ResponseCodeMismatch`.
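If you do want the health check to see a 200 instead, a minimal unauthenticated root route is enough. Here is a framework-free Python sketch (your real backend framework will differ; names are illustrative):

```python
# Minimal health-check endpoint using only the standard library; in a
# real backend you'd add the equivalent route to your framework.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = b"ok"
            self.send_response(200)  # what the load balancer health check wants
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/").status
print(status)  # 200
server.shutdown()
```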
v5.3.31 update, June 2025;
When trying to upgrade to the latest version (v5.3.31 as of 23/06/2025), I came across similar issues as with previous versions (above).
It looks like there are some lookups for the locale in the code, but none worked for me when I used "#locale=" or "&locale=".
So I've added a similar workaround as above, but updated for the latest version;
In PDFjs/web/viewer.mjs;
Find localeProperties, line 615;
localeProperties: {
value: {
lang: navigator.language || "en-US"
},
kind: OptionKind.BROWSER
},
Add an extra lookup above it to check the URL for any locale passed through, so change line 615+ to match;
localeProperties: {
value: {
lang: new URLSearchParams(window.location.search).get("locale") || navigator.language || "en-US"
},
kind: OptionKind.BROWSER
},
If there's a better way, someone do please let me know, but this small workaround resolved the issue for me.
By commenting out the line `DriverManager.getDriver().manage().window().maximize();` in my browser-launch function, the browser is maximised.
My headless code is mentioned in my question.
The problem is that by putting the fill aesthetic in ggplot(), you are telling stat_pvalue_manual that fill = Temperature, which it doesn't know what to do with, because Temperature doesn't exist in contrasts.means.species.temp. Move the fill aesthetic to geom_boxplot instead.
Results, but an unresolved issue:
As noted in the comments, I used an index.yaml file and the following command to add the index:
gcloud datastore indexes create index.yaml --database=mydatabase
After a short time, the indexes appeared in the Console, but for many hours the Size and Entries columns were blank. Finally, after over 24 hours, the indexes were populated.
In the meantime, I found a bug in my code. [Of course; things are never straightforward.] As of this moment, my application is working. However, this command still doesn't work:
gcloud datastore indexes list --project="myproject"
The response is still
ERROR: (gcloud.datastore.indexes.list) Projects instance [myproject] not found: Project 'myproject' does not exist.
Finally, the indexing process seems shockingly slow for a test database of this size. Here is the data from the Datastore Dashboard:
| Metric | Value |
|---|---|
| Entity count | 530 |
| Built-in index size | 956.57 KB |
| Composite index size | 251.49 KB |
| Data size | 787.74 KB |
| Total size | 1.95 MB |
| Last updated | Jun 22, 2025, 9:00:00 AM |
I am going to mark this as answered. But if you are reading this because you have a similar problem, I can offer only a few ideas that might help:
Do not use any "optional" fields in your composite indexes. RE:
https://cloud.google.com/datastore/docs/concepts/indexes#index_definition_and_structure
"An entity is included in the index only if it has an indexed value set for every property used in the index; if the index definition refers to a property for which the entity has no value, that entity won't appear in the index and hence will never be returned as a result for any query based on the index."
Make sure that no queries depend on fields that are not in your composite indexes.
https://cloud.google.com/datastore/docs/concepts/queries#query_interface
"The properties being filtered on must have a corresponding predefined index which can be defined in your index configuration file"
Be patient; the indexing process takes a long time. I think the update to the dashboard statistics takes even longer.
It could be totally unrelated.
In my case, I was missing a code behind handler.
define('db_host', 'localhost');
define('db_user', 'root');
define('db_password', '');
define('db_database', 'dairy');
define('PAGE_URL', 'http://localhost/Dairy/');
I reproduced the structure of the two files you described on playcode.io and there are no errors (you can test it at this link: https://playcode.io/2433719).
Maybe you should provide us with more information so we can help you more.
Set the write deadline in the `/reports/*` handlers using a response controller.
For example, add this line to the handler to set the timeout to 10 minutes:
http.NewResponseController(rw).SetWriteDeadline(time.Now().Add(10 * time.Minute))
The idle timeout is not relevant to your problem.
I tried to find that Environment section under Build & Deploy when deploying my React project, and it worked for me. Thanks!!
I was inattentive when looking through the workflow run logs, because there were messages giving a hint about what needed to be adjusted:
The build scan was not published due to a configuration problem.
The Gradle Terms of Use have not been agreed to.
For more information, please see https://gradle.com/help/gradle-plugin-terms-of-use.
I looked through https://gradle.com/help/gradle-plugin-terms-of-use and applied the following changes:
Added the `com.gradle.develocity` plugin to my settings.gradle:
plugins {
id 'com.gradle.develocity' version '4.0.2'
}
Added the `develocity` config to my build.gradle:
develocity {
buildScan {
termsOfUseUrl = "https://gradle.com/help/legal-terms-of-use"
termsOfUseAgree = "yes"
}
}
And now my workflow publishes Build Scans:
To test any plist, use the plutil command:
plutil -lint ~/Library/LaunchAgents/yourplistfile.plist
The answer should be something like yourplistfile.plist: OK
Can you try using one of these flags when starting the browser?
--start-maximized Starts the browser maximized, regardless of any previous settings.
or
--start-fullscreen Specifies if the browser should start in fullscreen mode, like if the user had pressed F11 right after startup.
More Info:
When you build HTML as a string in Angular without using `sanitizer.bypassSecurityTrustHtml`, Angular will sanitize the content for security. That means it removes certain elements it considers risky, like `<input>` buttons, to protect against attacks.
That's why:
Elements like `<a>` and `<br>` still show up; they're safe.
The `<input type="button">` does not show; Angular blocks it.
When you use `sanitizer.bypassSecurityTrustHtml`, you're telling Angular: "I know this HTML is safe, don't filter it." So Angular keeps everything, including the input.
This post is a bit old, but I personally believe that data shouldn't be retrieved from the database via OnInitialized / OnInitializedAsync when server pre-rendering is active. Depending on the size of the result, you might end up waiting twice. That's inefficient. I've decided to use OnAfterRender / OnAfterRenderAsync instead.
public partial class ReordEntities
{
private IList<IRecord> Records { get; set; } = new List<IRecord>();
protected override async Task OnAfterRenderAsync(bool firstRender)
{
if (firstRender)
{
// Load data in here
Records = await LoadRecords();
// Rerender UI
StateHasChanged();
}
}
}
Don't forget to call `StateHasChanged();`, otherwise the components won't be aware of the loaded data. And make sure you call `StateHasChanged();` within the if statement, otherwise a render loop will be created.
If loading the data takes 10 seconds, it makes a difference to me whether I wait 10 or 20 seconds. And in general, I think we shouldn't forget that database resources are quite expensive, and if I can avoid pointlessly retrieving the data from the database twice, I'll do it.
Best regards,
Marcus
Try customizing Additional CSS
button.ast-menu-toggle {
width: 100%;
text-align: right;
}
I struggled with this for 2 days and finally figured out the issue.
If you run the `file` command on this file, you will probably get the following result: ASCII text.
In our case, the library was pushed to version control. But due to the binary being quite large, it was stored using git LFS. Pulling these files using `git lfs fetch` and `git lfs pull` resolved the issue.
According to the link in the error message (https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset), you should instead change the implementation of the `__iter__` method so that it behaves differently based on which worker calls it, or change the `worker_init_fn` (see their two code examples).
Should I modify it after the fact in each worker so that each worker gets a dataset that's 4 times smaller?
Yes, from what I understand, this will make each worker fetch `1 / num_workers` of the dataset.
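The per-worker split can be sketched without PyTorch; here `worker_id` and `num_workers` stand in for the values real code would read from `torch.utils.data.get_worker_info()`:

```python
def shard(data, worker_id, num_workers):
    # Each worker yields every num_workers-th element, offset by its id,
    # so together the workers cover the dataset exactly once.
    for i, item in enumerate(data):
        if i % num_workers == worker_id:
            yield item

data = list(range(10))
shards = [list(shard(data, w, 4)) for w in range(4)]
print(shards)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```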
import datetime
now = datetime.datetime.now()
tz = datetime.timezone(datetime.timedelta(hours=-6), name="CST")
now_tz = now.replace(tzinfo=tz)
print(now_tz.isoformat("#", "milliseconds"))
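The same call with a fixed timestamp, so the output is reproducible:

```python
import datetime

tz = datetime.timezone(datetime.timedelta(hours=-6), name="CST")
dt = datetime.datetime(2025, 6, 22, 9, 0, 0, 123456, tzinfo=tz)
# sep="#" goes between the date and time parts; timespec="milliseconds"
# truncates the microseconds to three digits.
s = dt.isoformat("#", "milliseconds")
print(s)  # 2025-06-22#09:00:00.123-06:00
```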
When you hit the `/actuator/heapdump` endpoint:
A full GC (Garbage Collection) is often triggered before or during the heap dump process.
This is to ensure that the heap dump reflects the most accurate state of live objects.
The JVM tries to clean up as much as possible before writing the dump to reduce file size and improve clarity.
This GC can significantly reduce memory usage, especially if there was a lot of garbage (unreferenced objects) in memory.
The heap dump itself does not clear memory, but the GC that precedes it does.
Try adding `become: true` in your play.
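A minimal sketch of where the flag goes (hosts, task, and package name are illustrative):

```yaml
- hosts: webservers
  become: true          # run the tasks in this play with elevated privileges
  tasks:
    - name: Install a package that needs root
      ansible.builtin.package:
        name: htop
        state: present
```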
Thankfully it's working now!!
Fix #1
You have to zip ONLY the files and upload that to the Lambda function; otherwise it won't be able to find your file.
Fix #2
I had switched regions in between, which could cause confusion, so I deleted the existing functions and API gateways and made new ones.
Fix #3
I changed the Access-Control-Allow-Origin header of the API gateway from "*" to my deployed URL.
Fix #4
I increased the Lambda timeout to match or closely align with API Gateway's 29s limit.
I was getting the same error, and this tip helped me (I'm using version 0.21.2).
body {
  background-image:
    radial-gradient(closest-corner circle at -10% 15%, #D28CDE 0%, rgba(249, 249, 249, 1) 300%, transparent),
    radial-gradient(closest-corner circle at 100% 10%, #7A5AC7 0%, rgba(92, 50, 180, 0.01) 400%, transparent);
  background-color: #f9f9f9;
}
Thanks
When I get this error, it is because of an out-of-memory (OOM) condition: the training is taking more GB of RAM/GPU than are available, and the operating system kills the process. Could this be happening to you?
The answer from Дмитрий Винник worked for me, but I needed to install selenium-manager first.
conda install conda-forge::selenium-manager
To access any files in your repository, the workflow first needs to check out that repository.
Add the following step above any steps that require access to files from your repository:
- name: Checkout repository
uses: actions/checkout@v4
source of the underlying action: actions/checkout
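In context, a minimal job might look like this (job and step names are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      # Steps from here on can read files from the repository
      - name: List repository files
        run: ls -la
```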
I'm having the same error. Have you found a solution?
Your system may not install `podman-machine` automatically with `podman`. I recommend that you check if it's installed, or try installing it regardless.
For anyone finding this (like I did) while trying to solve this problem today, this is the best solution I could come up with:
Cascading Parameter Example | Tableau Public
The basic mechanics (take a look at the public workbook above to see it in action):
Separate sub_parameters for each main_parameter option
All sub_parameters are floating and stacked on top of each other on the dashboard
Visibility of sub_parameters is controlled with the "Control visibility using value" setting on the Layout tab. This points to a separate calculated boolean field for each sub_parameter so that only the appropriate one is showing at any given time
A final calculated field chooses the correct sub_parameter based on the main_parameter selection.
Same here.
I am also looking for the solution.
I've tried a lot but still haven't found one.
You should use Vuforia Engine 11.2+. Older versions do not support Unity 6 (see https://developer.vuforia.com/news).
It doesn’t seem to be a widely recognized pattern, so it's probably a custom blend of MVVM and MVP. Think of it like using MVVM’s ViewModel for state and data-binding, while also having a Presenter (like in MVP) to handle user interactions, navigation, or coordination logic. The View connects with the ViewModel for state, and the Presenter takes care of the flow and event handling. This kind of setup helps keep your ViewModels clean and easier to test. I’d suggest checking how the View, ViewModel, and Presenter are wired up in your codebase, it’ll help clarify things. Also, maybe ask your teammates if there’s any internal architecture diagram, they might already have one shared.
In the initial days, the metadata (the key-range-to-partition-ID map) was stored in DynamoDB itself. The router used to download the entire metadata, which caused spikes!
Later, AWS built MemDS to store the metadata.
Redis offers Redis Data Integration (RDI) for this. With RDI you can sync your Redis with the Postgres tables you want and transform the data to any Redis data type you want without coding.
I am from Ukraine and I use the interface as in the picture. Please help me solve this issue.
For me the following, usually helpful, import turned out to be the culprit:
import findspark
findspark.init()
First enroll the device > create a compliance policy for unmanaged device > put it through conditional access.
Migration takes time! Let's not hurry.
For a broadcast join to be considered in Spark, the left table should be the bigger one and the right table the smaller one. It is explained very nicely in this thread; refer to the top-rated answer in the link below.
Broadcast join in spark not working for left outer
I also had the same issue. The solution was to disable customized SMTP, and everything else worked successfully.
Go to Supabase, Authentication > Emails > SMTP settings, then deactivate it and save changes.
static IEdmModel GetEdmModel()
{
var builder = new ODataConventionModelBuilder();
builder.EnableLowerCamelCase();
return builder.GetEdmModel();
}
Set this up in your program startup; it works.
Can you quickly check the configuration of your 'TokenProvider' or 'JWTFilter' for token parsing or validation?
On standard Android, you can’t fully block the power button or shutdown via Android Device Management as it’s restricted at the system level.
That said, using kiosk mode via Android Enterprise (Device Owner) can limit user interaction. For advanced control, some MDMs like Samsung Knox, Scalefusion, or IBM MaaS360 (with OEM support) offer extended lockdown features.
If you call `app.get('/')` without the `@` decorator, FastAPI registers nothing. That means no route exists, so every request returns 404. This is the most common mistake:
# ❌ WRONG:
app.get('/')
def root():
return {'msg': 'hello'}
# ✅ CORRECT:
@app.get('/')
def root():
return {'msg': 'hello'}
Often, your script will include routers or static mounts, but if the decorators aren't applied properly, nothing gets registered. Here's a robust minimal example that you can copy into `main.py` and test:
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
@app.get("/")
def hello():
return {"hello": "world"}
@app.get("/abc")
def abc():
return {"hello": "abc"}
Run it with:
uvicorn main:app --reload --host 0.0.0.0 --port 8000
Navigate to `GET /`, `/abc`, or `/static/…`; they should all work. If `GET /` still returns 404, re-check your decorators.
If you're including an `APIRouter`:
from fastapi import FastAPI, APIRouter
router = APIRouter(prefix="/items")
@router.get("/")
async def list_items():
return ["a", "b"]
app = FastAPI()
app.include_router(router)
Your route is reachable at `/items/`, not `/`. So `GET /` → 404, and `GET /items/` → 200 with `["a","b"]`. This is another source of "missing" routes.
root_path
If you're hosting behind a proxy (Nginx, Traefik, API Gateway, etc.) that strips or adds leading path segments, FastAPI's OpenAPI UI (/docs) or even your paths can break. Use the `root_path` feature:
Via code:
app = FastAPI(root_path="/myapp")
Via Uvicorn CLI:
uvicorn main:app --root-path "/myapp"
This ensures both routing and docs work with the prefixed path.
Check you’re in the correct working directory (project root).
Temporarily hardcode a simple root route:
@app.get("/")
def debug_root():
return {"ok": True}
Print the registered routes to verify:
for r in app.routes:
print(r.path, r.methods)
Then run your service and inspect output to know exactly what endpoints exist.
There is another solution: send mail with just a JavaScript SDK, without configuring SMTP, etc. Install the SDK, enter the requested information, then call one function and the mail is sent. No spam, extremely secure, CORS handled, etc. It works on both the server side and in the browser.
WebRTC expects SDP to follow RFC 4566, which mandates that each line end with CRLF (\r\n). Just add \r\n at the end of every line. For example:
"v=0\r\n" +
"o=- 0 0 IN IP4 127.0.0.1\r\n" +
"s=-\r\n" +
"t=0 0\r\n" +
"a=group:BUNDLE 0 1\r\n" + ...
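A small plain-JavaScript helper (illustrative) that normalizes line endings before handing the SDP to WebRTC:

```javascript
// Normalize every line ending in an SDP string to CRLF, per RFC 4566.
// Already-correct CRLF endings are left untouched.
function toCrlf(sdp) {
  return sdp.replace(/\r?\n/g, "\r\n");
}

const fixed = toCrlf("v=0\no=- 0 0 IN IP4 127.0.0.1\ns=-\n");
console.log(JSON.stringify(fixed)); // "v=0\r\no=- 0 0 IN IP4 127.0.0.1\r\ns=-\r\n"
```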
You don't really need to mess with array formulas; there is a simpler way. Imagine you have a category column in A and values in B, and you want the per-category max (GROUPBY), but you don't have the latest version of Excel. Assuming row 1 holds your headers:
In C2, type "=VLOOKUP(A2,D:E,2,FALSE)"
In D2, type "=IF(E2="","",A2)"
In E2, type "=IF(COUNTIFS(A:A,A2,B:B,">"&B2)=0,B2,"")"
Copy the formulas down the sheet. What do they do?
Column C says you want to look up your current category in the contents of column D and return the value next to it in E.
Column D says you want to display your category, ready for your lookup, but ONLY where there's a value in column E next to it.
Column E says you want to look up how many records there are that share the category in column A, but have a higher value than the current one. If that total is 0, return the value, otherwise leave it blank.
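To see what the column E formula is computing, here is the same group-wise max logic in plain Python (illustrative data, not Excel):

```python
# A row's value is kept only if no other row in the same category has a
# larger value, which is exactly what COUNTIFS(A:A,A2,B:B,">"&B2)=0 checks.
rows = [("a", 3), ("a", 7), ("b", 5), ("b", 2)]
maxes = {}
for cat, val in rows:
    if sum(1 for c, v in rows if c == cat and v > val) == 0:
        maxes[cat] = val
print(maxes)  # {'a': 7, 'b': 5}
```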
Simon.
The reason for the problem you are describing (generating a trigger in a doctrine migration) is most likely a problem concerning the delimiter.
Usually in SQL, when importing a larger SQL-file that contains triggers, the statements that generate the trigger look like this:
DELIMITER //
CREATE TRIGGER `MyTrigger` BEFORE DELETE ON `myTable` FOR EACH ROW BEGIN
DELETE FROM anotherTable WHERE pk = OLD.pk;
END//
DELIMITER ;
The "DELIMITER //" and "// DELIMITER" commands are necessary to stop SQL from interpreting the semicolon as the end token of the CREATE TRIGGER command. They temporarily change the end token to "//", so that the semicolon after OLD.pk is interpreted as the end token of the "DELETE FROM" statement.
This does not work when using the method "addSql(...)" in your migration. The lines containing "DELIMITER //" and "// DELIMITER" must be omitted, and then everything works fine.
These commands are not necessary because addSql always accepts only a single SQL statement at a time, so it is clear that the semicolon belongs to a statement in the BEGIN..END block of the trigger and not to the trigger itself.
The workaround with explode(...) does not work, because addSql accepts only a string containing a single SQL statement (not an array of multiple SQL statements), and because it does not strip the delimiter commands before and after the trigger.
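For illustration, the trigger above as a single addSql() call in a migration (no DELIMITER lines; PHP nowdoc syntax assumed):

```php
// Inside the migration's up() method:
$this->addSql(<<<'SQL'
CREATE TRIGGER `MyTrigger` BEFORE DELETE ON `myTable` FOR EACH ROW BEGIN
    DELETE FROM anotherTable WHERE pk = OLD.pk;
END
SQL);
```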
import requests

def get_edge_latest_version():
response = requests.get("https://edgeupdates.microsoft.com/api/products")
data = response.json()
for item in data:
if item['Product'] == "Stable":
for release in item['Releases']:
if release['Platform'] == "Windows" and release['Architecture'] == "x64":
version = release['ProductVersion']
download_link = release['Artifacts'][0]['Location']
return version, download_link
Lodash seems to work better.
I'm working on a project using [Electron](https://www.electronjs.org/) and structuredClone did not do the job.
Are you running the server locally? If yes, then you most probably need port forwarding.
Run `adb reverse tcp:3000 tcp:3000`, making sure both ports match your server's port.
This "reconnects" the Android device to the local machine; I found it useful, for example, when my machine had been asleep for a long time, and in several other scenarios.
For my own needs, I created a package that combines get and go_router, and later published it to help others: https://pub.dev/packages/getx_go
You should use one of the oracle.jakarta.jms.AQjmsFactory.getConnectionFactory methods. It returns an instance of jakarta.jms.ConnectionFactory.
Can you place something in a settings file to tell Visual Studio Code to look for these files in a Linux Docker container?
Dynamic attributes like `.thumbnail` from `StdImageField` may not be fully attached after `.save()` or `.create()`, causing pickling errors when caching. Use `refresh_from_db()` to reload the instance and ensure these attributes are correctly bound.
I was facing the same issue with my Java application. The API worked just fine in Postman, but threw a "PATCH method not allowed" exception when called through the Spring Boot application. I used the code below to get around it. FYI, I also tried adding `?_HttpMethod=PATCH` to the POST method, but had no luck.
String url = UriComponentsBuilder.newInstance()
.scheme(protocol)
.host("your-salesforce-instance-url")
.path(apiVersionPath + "/sobjects/Case/" + caseId)
.toUriString();
// Override getName() so Commons HttpClient sends PATCH on the wire
PostMethod postMethod = new PostMethod(url) {
@Override
public String getName() {
return "PATCH";
}
};
postMethod.setRequestHeader(HttpHeaders.AUTHORIZATION, "Bearer xxxxxx");
ObjectMapper mapper = new ObjectMapper();
String body = mapper.writeValueAsString("your-json-request");
postMethod.setRequestEntity(new StringRequestEntity(body, "application/json", "UTF-8"));
HttpClient httpClient = new HttpClient();
int statusCode = httpClient.executeMethod(postMethod);
Just in case it wasn't obvious (as it wasn't for me), we can pass GeomShadowText from {shadowtext} to a copy of geom_sf_text() from {ggplot2}, in place of the existing geom = GeomText argument.
geom_sf_shadowtext <- function(
  mapping = aes(),
  data = NULL,
  stat = "sf_coordinates",
  position = "identity",
  ...,
  parse = FALSE,
  nudge_x = 0,
  nudge_y = 0,
  check_overlap = FALSE,
  na.rm = FALSE,
  show.legend = NA,
  inherit.aes = TRUE,
  fun.geometry = NULL
) {
  if (!missing(nudge_x) || !missing(nudge_y)) {
    if (!missing(position)) {
      cli::cli_abort(c(
        "Both {.arg position} and {.arg nudge_x}/{.arg nudge_y} are supplied.",
        i = "Only use one approach to alter the position."
      ))
    }
    position <- position_nudge(nudge_x, nudge_y)
  }
  layer_sf(
    data = data,
    mapping = mapping,
    stat = stat,
    geom = GeomShadowText,
    position = position,
    show.legend = show.legend,
    inherit.aes = inherit.aes,
    params = rlang::list2(
      parse = parse,
      check_overlap = check_overlap,
      na.rm = na.rm,
      fun.geometry = fun.geometry,
      ...
    )
  )
}
I couldn't easily get the example in the original question to work due to API key issues, so here's a simpler working example:
library(ggplot2)
library(sf)
library(shadowtext)
library(rnaturalearth)
Africa <- ne_countries(continent = "Africa")
ggplot(data = Africa) +
geom_sf() +
geom_sf_shadowtext(mapping = aes(label = name_en))
I am writing this as an answer since I do not have enough reputation to comment yet.
I found this post while having the same problem and tried to recreate my own problematic code, since that was asked for in the comments; so this is just what I think could be the problem, rather than a solution.
In my case, the problem is the display type.
The element containing the text only stays as wide as the text itself when using display: inline.
But since using that is not always an option, I think what the original poster needs is a way to limit the width to the text with non-inline display values and without using width: min-content.
<div style="width: 65px;background: black;">
<span style="display: block;background: gray;">Short Text</span>
</div>
The module path may need to be updated to include javafx.media:
--add-modules javafx.media
To solve the issue, follow these steps:
Create a table with a JSON-format column, for example a table named Calculation with columns calculationNr, date, volume, and calculation.
Create a view using the following query to split the column containing the JSON value into separate fields:
CREATE VIEW SplitView AS
SELECT c.calculationNr, c.date, c.volume,
    JSON_VALUE(x.Value, '$.generalCal') AS generalCal,
    JSON_VALUE(x.Value, '$.position') AS position,
    JSON_VALUE(x.Value, '$.counter') AS counter
FROM Calculation c
CROSS APPLY OPENJSON(c.calculation) AS x
This query creates separate fields for generalCal, position, and counter based on the JSON values in the calculation column.
Connect to SQL Server and import the created view.
You will get the three separate fields you want, as in your given simplified table.
This will help you do the following sums:
SumOfValue | SumOfCounter1 | SumOfCounter2 |
---|---|---|
150 | 1000 | 800 |
40 | 25 | 88 |
In Visual Studio 2022, I don't see a way to directly start/stop profiling, but I do see a way to add "marks" to achieve the same thing: https://learn.microsoft.com/en-us/visualstudio/profiling/add-timeline-graph-user-marks
You can set marks in your code using the Microsoft.DiagnosticsHub namespace, and then once the data is collected, you can select the time between two collected marks to limit the profiling results to that time period.
I tracked only the LLM process used by Ollama (e.g. Mistral) using psutil. This gave me accurate CPU and RAM usage of just the language model, not my whole system.
Finds a running process with name "ollama"
or "mistral"
Measures only its CPU + memory usage
Displays that alongside inference time
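A minimal sketch of that approach with psutil; the process names and the sampling interval are assumptions, so adjust them to whatever your Ollama/Mistral processes are actually called:

```python
import psutil

def find_llm_process(names=("ollama", "mistral")):
    # Scan running processes for the first whose name matches (names are assumed)
    for proc in psutil.process_iter(["name"]):
        pname = (proc.info["name"] or "").lower()
        if any(n in pname for n in names):
            return proc
    return None

def sample_usage(proc, interval=1.0):
    # CPU percent is measured over `interval` seconds; RSS is reported in MiB
    cpu = proc.cpu_percent(interval=interval)
    rss_mib = proc.memory_info().rss / (1024 ** 2)
    return cpu, rss_mib
```

To combine this with inference time, wrap the model call between two time.monotonic() readings and print sample_usage(proc) alongside the elapsed time.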
These steps worked for me with tailwindcss v4.1.10 and Angular 19:
https://tailwindcss.com/docs/installation/framework-guides/angular
I had regenerated the key store and triple checked the SHA1 and everything. Interestingly the Google One Tap showed up and allowed me to click the profile, and it would error out afterwards. When I used a SHA1 that was obviously invalid, the component errored out immediately.
I found a blog suggesting using a 'Web'-type OAuth client, instead of the 'Android' one suggested by most blogs (and Claude). I left the URL fields empty, and this worked!
TL;DR: try the 'Web' client instead of 'Android'.
This gets the most recently created pod:
kubectl get pods --sort-by=.metadata.creationTimestamp -o jsonpath="{.items[-1].metadata.name}"
Solution for IntelliJ 2025.1: Uncheck "Detect executable paths automatically."
I face the same issue; it does not work for me even after changing the file to logback-spring.xml.
pasting the error for your ref : 10:13:10,149 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@61:31 - no applicable action for [springProfile], current ElementPath is [[configuration][springProfile]]
10:13:10,149 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@62:29 - no applicable action for [root], current ElementPath is [[configuration][springProfile][root]]
10:13:10,149 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@63:46 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@64:57 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@68:32 - no applicable action for [springProfile], current ElementPath is [[configuration][springProfile]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@69:29 - no applicable action for [root], current ElementPath is [[configuration][springProfile][root]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@70:46 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
10:13:10,150 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@71:57 - no applicable action for [appender-ref], current ElementPath is [[configuration][springProfile][root][appender-ref]]
Deployed in WSL + Docker.
I also had the same issue, but in my case the following did not resolve the error:
implementation 'com.google.android.gms:play-services-safetynet:+'
Then I checked my phone's DNS settings, which had a domain configured for ad blocking, and turned that DNS setting off.
Google Photos API - deprecated.
Photo Picker API - only accesses data created by that application, not all our pictures!
Takeout - the only way, I think... (not pretty...)
I've been struggling with this error forever. It must be some weird IntelliJ bug: at first, when I went to File > Project Structure > Platform Settings > SDKs, it picked up the Oracle OpenJDK 21 installed on my computer, but the Classpath tab was empty ("nothing to show"), and the error Kotlin: Cannot access 'java.io.Serializable' which is a supertype of 'kotlin.String'. Check your module classpath for missing or conflicting dependencies kept showing. What I did was, from File > Project Structure > Platform Settings > SDKs, remove the JDK, then add it again from the directory where it was installed; the Classpath tab then showed some entries and the error went away.
In my case I replaced the equal-width constraint with a plain width constraint and set its constant by calculating the reference value, which seemed easier.
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.EntityFrameworkCore.Metadata.Internal.ColumnBase`1.Microsoft.EntityFrameworkCore.Metadata.IColumnBase.get_ProviderValueComparer()
at Microsoft.EntityFrameworkCore.Migrations.Internal.MigrationsModelDiffer.Diff(IColumn source, IColumn target, DiffContext diffContext)+MoveNext()
at Microsoft.EntityFrameworkCore.Migrations.Internal.MigrationsModelDiffer.DiffCollection[T](IEnumerable`1 sources, IEnumerable`1 targets, DiffContext diffContext, Func`4 diff, Func`3 add, Func`3 remove, Func`4[] predicates)+MoveNext()
at System.Linq.Enumerable.ConcatIterator`1.MoveNext()
Is this issue already solved?
I got the same issue when trying to update from .NET 6 to .NET 8.
Did you ever resolve this? I'm running into exactly the same issue.
Build your Docker image and push it to a registry. Create Kubernetes Deployment and Service manifests to define how the container runs and is exposed. Use kubectl apply -f to deploy them. Access the app via a NodePort or LoadBalancer service. You can automate the whole process using Jenkins pipelines.
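As a sketch, minimal Deployment and Service manifests might look like this (the app name, image, and ports are placeholders, not values from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # the image you pushed
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort              # or LoadBalancer on a cloud provider
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Save both in one file and deploy with kubectl apply -f manifests.yaml.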
If you're implementing role-based access in a MERN stack development project and want to designate yourself as the sole admin using userContext, a common pattern is to assign a default admin user manually in your seeding script or during user registration, then manage access logic in your middleware using JWT or context-based checks.
Would also recommend double-checking how userContext is passed through your protected routes. If you're using React Context on the frontend, make sure the server correctly validates and distinguishes roles based on the token or session data.
A Business Systems Analyst on a data warehouse application helps gather business needs, design data models, and ensure accurate data for reports. They bridge business and tech teams, using tools like SQL and BI software, and also test and optimize data systems to support better decisions.
The question is quite old but still relevant, and the technology has changed. I am using WASM web tokens to secure my unauthenticated API: tokens generated in the browser using WebAssembly, with shared secrets the backend API uses to decrypt and verify them. WebAssembly, being bytecode, is far harder to read than JavaScript.
on:
  workflow_run:
    workflows:
      - "CI + SonarQube Analysis"
    types:
      - completed
First approach: try giving the workflow a simple name; the problem might be a name mismatch (e.g. CISonarQubeAnalysis). Second approach: add completed as the type, as shown above.
This is what Collectors are great for - you can collect data when the whole project is analysed and then evaluate them in a single rule invoked at the end of the analysis.
Learn more: https://phpstan.org/developing-extensions/collectors
Some great community packages are implemented thanks to Collectors, like https://github.com/shipmonk-rnd/dead-code-detector.
You're not alone in facing this 502 issue with AWS CloudFront + Google Cloud Run. This is a known pain point due to the subtle but critical differences in how CloudFront expects an origin to behave versus how Google Cloud Run serves responses.
Quick Summary of 502 Causes (Specific to CloudFront + Cloud Run)
CloudFront returns a 502 Bad Gateway when:
It can't understand the response from the origin (Cloud Run in this case)
There’s a TLS handshake failure, unexpected headers, timeout, or missing response headers
CloudFront gets a non-compliant response format (e.g., too long/short headers, malformed HTTP version)
Even though Cloud Run may respond with 200 OK directly, it does not guarantee compatibility with CloudFront's proxy behavior.
Likely Causes in Your Case
Here are the most common and probable issues based on your setup:
Cloud Run's HTTP/2 or Chunked Encoding Response
Problem: CloudFront expects HTTP/1.1 and may misinterpret Cloud Run's chunked encoding or HTTP/2 behavior.
Fix: Force Cloud Run to downgrade to HTTP/1.1 by putting a reverse proxy (like Cloud Run → Cloud Load Balancer or Cloud Functions → CloudFront) in between, or use a Cloud Armor policy with a backend service.
Missing Required Headers in Response
Problem: CloudFront expects certain headers (e.g., Content-Length, Date, Content-Type) to be present.
Fix: Log all outbound headers from Cloud Run and ensure the response is fully RFC-compliant. Use a middleware to enforce this.
Random Cold Starts or Latency in Cloud Run
Problem: Cloud Run can scale to zero, and cold starts cause delay. CloudFront times out quickly (~10 seconds default).
Fixes:
• Set min instances in Cloud Run to keep one container warm
• Optimize cold start time
• Increase CloudFront origin timeout (if using custom origin)
TLS Issues Between CloudFront and Cloud Run
Problem: CloudFront uses SNI-based TLS. If Cloud Run isn’t handling it as expected or certificate isn’t valid for SNI, 502 can result.
Fix:
• Use fully managed custom domains in Cloud Run with valid certs
• Check that your custom domain doesn’t redirect to HTTPS with bad certificate chain when coming from CloudFront.
Cloud Run Returns 404 or 500 Internally
Problem: If Cloud Run returns a 404/500, CloudFront may wrap this in a 502
Fix: Log actual responses from Cloud Run for all paths
Best Practice: Use a Layer Between CloudFront and Cloud Run
Instead of connecting CloudFront directly to Cloud Run, use:
• Google Cloud Load Balancer (GCLB) with Cloud Run as backend
• Then point CloudFront to the GCLB IP or domain
This avoids a ton of these subtle issues and gives you more control (headers, TLS, routing).
Diagnostic Checklist
Item Status:
• Cloud Run always returns required headers (Content-Length, Content-Type, Date)
• Cloud Run has min instance (avoid cold starts)
• CloudFront origin protocol set to HTTPS only
• CloudFront timeout increased (origin read timeout = 30s or more)
• Cloud Run domain SSL cert supports SNI
• Logs from Cloud Run show successful 200s
• CloudFront logs show exact reason (check logs or enable logging to S3)
Community Reports
Many developers report intermittent 502s when using CloudFront + Cloud Run without a reverse proxy.
Some fixes:
• Moving to Google Cloud CDN instead of CloudFront
• Adding NGINX or Cloud Load Balancer in between
• Avoiding chunked responses and explicitly setting Content-Length
Suggested Immediate Actions
• Enable CloudFront logging to S3 to get more detail on the 502s
• Add a reverse proxy (NGINX or GCLB) between Cloud Run and CloudFront
• Force HTTP/1.1 response format from Cloud Run
• Set min_instances=1 to eliminate cold starts
• If nothing helps, consider using Google Cloud CDN for tighter integration with Cloud Run
If you want help debugging further, please provide:
• A sample curl -v to the Cloud Run endpoint
• CloudFront response headers when the 502 happens
• Cloud Run logs during the time of error
Let me know and I can walk you through fixing this definitively.
Check your NLog.config to ensure the layout includes ${exception}:
<target xsi:type="File" name="logfile" fileName="log.txt"
        layout="${date} ${level} ${message} ${exception:format=ToString}" />
You can achieve this by turning off the interactive option in the "Rating Bar Properties" section; you can then use decimal values such as 3.1, 3.5, etc. to control the star filling.
Without turning it off, it won't work.
If you're building a dashboard in .NET 6 WinForms and looking for a modern, high-performance charting solution, you can try the Syncfusion WinForms Chart library.
It offers a wide variety of 45+ chart types including line, bar, pie, area, and financial charts.
Optimized for performance, it handles large datasets smoothly without lag.
Fully customizable with rich styling and interaction options like zooming, panning, tooltips, and annotations.
For more detailed information, refer to the following resources:
Demo: https://github.com/syncfusion/winforms-demos/tree/master/chart
Documentation: https://help.syncfusion.com/windowsforms/chart/getting-started
Syncfusion offers a free community license to individual developers and small businesses.
Note: I work for Syncfusion.