AddIdentityApiEndpoints<TUser>()
is not available in the Infrastructure layer. When trying to use AddIdentityApiEndpoints<TUser>()
in the Infrastructure layer, you might encounter errors because this method is specifically designed for Minimal APIs in .NET 8 and is not accessible from class libraries like Infrastructure. On the other hand, you may notice that you can access the AddIdentityCore<TUser>() and AddIdentity<TUser>() methods with only the package Microsoft.AspNetCore.Identity.EntityFrameworkCore
installed in your Domain or Infrastructure layer.
AddIdentityCore<TUser>()
is part of the Microsoft.AspNetCore.Identity
package and is available in all ASP.NET Core projects, including class libraries like Infrastructure.
AddIdentityApiEndpoints<TUser>()
, however, is part of the Minimal API feature in .NET 8 and is only available in ASP.NET Core Web Applications. It depends on the Microsoft.AspNetCore.App
framework, which isn't included in class libraries.
To maintain a clean architecture, you should:
In the Infrastructure layer, configure Identity Core and Entity Framework.
services.AddIdentityCore<IdentityUser>().AddEntityFrameworkStores<ApplicationDbContext>();
In your Web API layer, configure the API endpoints (via AddIdentityApiEndpoints
).
builder.Services.AddIdentityApiEndpoints<IdentityUser>();
AddIdentityApiEndpoints<TUser>()
can only be used in ASP.NET Core Web Applications.
Infrastructure should only handle the service configuration, while the Web API layer should expose the endpoints.
This maintains a clean and modular architecture.
humanfriendly is the library you want.
I faced this error in a static site deployment to Cloudflare Pages (aka Workers and Pages).
What ultimately worked was to simply add a 404.html file in the site's root. From the Pages docs:
If your project does not include a top-level 404.html file, Pages assumes that you are deploying a single-page application. This includes frameworks like React, Vue, and Angular. Pages' default single-page application behavior matches all incoming paths to the root (/), allowing you to capture URLs like /about or /help and respond to them from within your SPA.
I created a Python script that converts JUnit XML files to a CSV file:
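A minimal sketch of such a converter, using only the standard library and assuming the common JUnit XML layout (a `<testsuite>` root or a `<testsuites>` wrapper with `<testcase>` children; the function name and columns here are illustrative, not the original script):

```python
import csv
import xml.etree.ElementTree as ET

def junit_to_csv(xml_path, csv_path):
    """Flatten each <testcase> of a JUnit XML report into one CSV row."""
    root = ET.parse(xml_path).getroot()
    # Reports may use <testsuite> as the root or wrap suites in <testsuites>.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["suite", "classname", "name", "time", "status"])
        for suite in suites:
            for case in suite.findall("testcase"):
                # A child element marks a non-passing result.
                if case.find("failure") is not None:
                    status = "failure"
                elif case.find("error") is not None:
                    status = "error"
                elif case.find("skipped") is not None:
                    status = "skipped"
                else:
                    status = "passed"
                writer.writerow([
                    suite.get("name", ""),
                    case.get("classname", ""),
                    case.get("name", ""),
                    case.get("time", ""),
                    status,
                ])
```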
Turns out that there was an issue from Google's side. I was able to reach out to them and they fixed the issue.
If you are on WSL, add this to your .bashrc
or .zshrc
alias vscode="/mnt/c/Users/<YourUsername>/AppData/Local/Programs/Microsoft\ VS\ Code/bin/code"
Neither of the URLs you provided is correctly formatted. In your 1st link, there is a quote mark before "username" that should be removed. Conversely, your 2nd link is missing the quote mark after ".com".
Your issue could be that, because your links were not properly formatted, Teams didn't recognize the organization you were a part of, which is why it kept asking you to log in.
For more details, please see the Microsoft Teams Deep Linking Page.
Hobbyist here, not a Sentry expert.
Adding Sentry is smart. Depending on your audience adoption (and your priorities), maybe set a no-op for JS, and once live, change course as you see more adoption of your solution.
I wonder if it's a FR for the Sentry team.
Despite the confusing output information, it seems the issue was simply a missing using
(using Microsoft.AspNetCore.Components;
), which wasn't being caught because of the way the compilation was working before. Possibly an IDE bug?
This article contains all the info about API that you need: https://hw.glich.co/p/what-is-an-api
The solution was to remove fill=NA from the aes() call, since it's not an aesthetic (that is, it doesn't apply to a variable, it just applies to the entire map):
wind_map <- ggplot() +
  geom_sf(data=shift_geometry(zip_geo), aes(fill=wind_group), color=NA) +
  geom_sf(data=shift_geometry(state_geo2), fill=NA, color="black")
I believe the issue you're seeing is an interoperability issue caused by differing codepoints. I've run into something similar trying to connect an OpenSSL 3.5 client to a BCJSSE 1.80 server.
More specifically, Bouncy Castle 1.80 implements draft-connolly-tls-mlkem-key-agreement-03. The codepoint for specifying ML-KEM-768 in this draft is 0x0768.
On the other hand, OpenSSL 3.5 implements the updated draft-connolly-tls-mlkem-key-agreement-05, which has been replaced by draft-ietf-tls-mlkem. The codepoint for ML-KEM-768 for these drafts is 0x0201. You should be able to validate this with a packet capture.
According to Bouncy Castle release notes, 1.81 should implement the appropriate draft. Upgrading to 1.81 should let your application interoperate with OpenSSL.
Not directly related, but relevant for people who find this post (like me!) because they got the OP's error message:
If your function is in a namespace, you must include the namespace, e.g.
array_walk($myArray, 'myNamespace\myFunction');
There is no problem in your code. I tried this on my machine and it seems to be working; without knowing your environment it's hard to tell. Possible issues could be the following:
Your browser is somehow using old cached data which didn't have this feature. Do a hard refresh with Ctrl + Shift + R, or disable caching using your browser's dev tools.
Your browser doesn't support it. Try using Chrome, which worked for me.
In case anyone has the same mistake as me - in host.json
:
"functions": [ "" ],
It should just be an empty array
"functions": [ ],
Check for refresh tokens. It's an old post, but I came across a similar issue and a refresh token helped renew the access token.
I would love to get this solved by whoever made this change to VS Code; they might have mistakenly forgotten about it, and hopefully they eventually make a commit that fixes it.
See if this helps:
github.com/chiranjeevipavurala/gocollections
For someone running into that error in GH actions, setting:
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
worked for me.
As we know, View
and other components only show a minimum height
when the content (with a height) is rendered.
Maybe this snippet matches your situation:
...
<ScrollView horizontal showsHorizontalScrollIndicator={false} >
<View style={{marginHorizontal: 10, flexDirection: "row"}}>
Just look at ActiveWorkbook.Saved
If true, then the workbook has been saved. If false, then it has not.
Note: You can set it to be true ... so the user can close the workbook without saving. That's rarely wanted or needed ... but it's sometimes useful.
Best practice in this case is to use dynamic format strings: https://learn.microsoft.com/en-us/power-bi/create-reports/desktop-dynamic-format-strings
The only solution I have for now is to wrap the RiveAnimationView inside a FrameLayout and make this frame clickable and focusable, then make the click listener open the nav drawer and close it if it was opened. I made another .riv file that is set to control the animation with a number input from 0 to 1, which makes the animation slide nicely (0f to 1f), so I set this animation only in the onSlide method of the nav drawer. Now it auto-slides when you click the button or manually slide it, and the animation works really well.
The previous user's answer was correct, but instead of full upgrade it's
apt full-upgrade
With a hyphen. All credit and thanks to akino.
Please help me with this error.
2025-07-15 16:17:22 SERVER -> CLIENT: 220 smtp.gmail.com ESMTP 41be03b00d2f7-b3bbe728532sm12105356a12.69 - gsmtp
2025-07-15 16:17:22 CLIENT -> SERVER: EHLO localhost
2025-07-15 16:17:22 SERVER -> CLIENT: 250-smtp.gmail.com at your service, [103.214.61.50]250-SIZE 36700160250-8BITMIME250-STARTTLS250-ENHANCEDSTATUSCODES250-PIPELINING250 SMTPUTF8
2025-07-15 16:17:22 CLIENT -> SERVER: STARTTLS
2025-07-15 16:17:23 SERVER -> CLIENT: 220 2.0.0 Ready to start TLS
SMTP Error: Could not connect to SMTP host.
2025-07-15 16:17:23 CLIENT -> SERVER: QUIT
2025-07-15 16:17:23 SERVER -> CLIENT:
2025-07-15 16:17:23 SMTP ERROR: QUIT command failed:
SMTP connect() failed. https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting
PHPMailer Error: SMTP connect() failed. https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting
Have you seen that the new Graph EXPORT / IMPORT API uses SFP too?
https://learn.microsoft.com/en-us/graph/import-exchange-mailbox-item#request-body
Both the AWS Data Catalog table AND individual partitions have a serialization setting. An update to the serialization lib settings on the AWS Data Catalog table do not automatically update the serialization settings for the partitions on that table.
You can check the serialization lib on the partitions by Viewing the Properties of an individual partition on the table in the AWS Data Catalog console.
It may be necessary to add a Classifier on the Crawler that creates the table, or to recreate the partitions after updating the AWS Data Catalog table.
The bottom of the docs on this page have additional details on adding a classifier and how LazySimpleSerDe will be selected by default unless a classifier is added that specifies OpenCSVSerDe: docs.aws.amazon.com/glue/latest/dg/add-classifier.html
**Easiest way**
Simply open MongoDB Compass -> Go to Help -> About MongoDB Compass -> Click and a dialog box will appear showing the version.
Without knowing much about your configuration, the most likely issue is that you haven't set a proper hostname, nor have you generated a valid SSL certificate.
Generating certificates directly for IPs such as yours (https://143.198.26.153) is something that's currently being integrated in, e.g., Let's Encrypt.
I would suggest that, instead of the guides you linked, you check the official documentation of ownCloud for installing: https://doc.owncloud.com/server/next/admin_manual/installation/.
The Quick install guide provides you with instructions to properly configure a hostname for your own cloud instance.
There are also different ways to setup SSL certificates: One way is to import them directly through the web interface, another one would be to use the Mozilla SSL Configuration Generator (which the wiki also mentions), the last way would be to run a reverse proxy such as Nginx which would (with the right configuration) handle the https traffic for you.
A simple approach that works in many cases like this is to pivot your data to reshape it to make analysis simpler. https://help.tableau.com/current/pro/desktop/en-us/pivot.htm
Heroku assigns ports dynamically, so use:
app.listen(process.env.PORT || 5000)
That will bind it to the correct port at runtime.
$ git push heroku yourbranch:main
If you are using the Heroku web dashboard, you can choose and deploy the GitHub branch directly from there as well, which is way better.
Changing the version of react-grid-layout fixed the issue for me.
Use my detailed method to get the stream up online; there are two methods provided in it. The first is via a Python script, and the second is through MediaMTX, which is highly optimized for streaming and makes a peer-to-peer connection, giving extremely low latency and CPU usage. Find more info here: https://github.com/LovejeetM/vslam_raspberry-pi/tree/main/stream_setup
There is an error in your Python code on the line below. Where is the secret_dict object defined? You may have left some code behind from the sample you copied:
return secret_dict.get("username_rds")
This has to do with scoping: data can be recorded at a user, session, item, or event level scope.
Mixing the scopes is a bad idea, although GA4 does not warn you when it happens. An event can happen many times in a session, so it's not possible to give an engagement rate, because the answer could be both yes and no during the same session. The quick answer is to use segments for sessions with and without the event.
It's a complicated topic, so you will benefit from googling, but I have found this page to be really useful:
https://www.optimizesmart.com/ga4-scopes-explained-user-session-event-item-scopes/
I got the solution, by clicking here:
[Menu]
- Settings >> Appearance & Behavior >> Editor >> Color Scheme >> General >> Sticky Lines >> Background
Hi all. I'm getting a 404 error when creating an API with Flask:
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try
again.</p>
I'm attaching the code; if you can help me figure out where I'm going wrong, thanks.
from flask import Flask, request, jsonify
from iso8583 import encode, decode
from iso8583.specs import default_ascii as spec
import re
import os
import json
from datetime import datetime
def obtener_valor_campo(parsed, campo):
valor = parsed.get(campo)
if isinstance(valor, dict) and "value" in valor:
return valor["value"]
return valor
app = Flask(__name__)
def reconstruir_mensaje(texto):
bloques = re.findall(r'\[(.*?)\]', texto)
return ''.join(bloques)
def extraer_campos_iso(mensaje_iso):
try:
iso_bytes = mensaje_iso.encode()
parsed, _ = decode(iso_bytes, spec=spec)
return {
"mti": parsed.get("mti"),
"campo_3": obtener_valor_campo(parsed, "3"),
"campo_4": obtener_valor_campo(parsed, "4"),
"campo_7": obtener_valor_campo(parsed, "7")
}
except Exception as e:
return {"error": f"Error decodificando mensaje ISO: {str(e)}"}
@app.route('/extraer_campos', methods=['GET'])
def procesar_texto():
try:
datos = request.get_json()
texto = datos.get("texto", "")
mensaje_iso = reconstruir_mensaje(texto)
resultado = extraer_campos_iso(mensaje_iso)
return jsonify(resultado)
except Exception as e:
return jsonify({"error": f"Error general: {str(e)}"}), 500
def es_binario(cadena):
return all(c in '01' for c in cadena)
def limpiar_a_binario(cadena):
return ''.join(c for c in cadena if c in '01')
def procesar_archivo(ruta_archivo, ruta_json, codificacion='utf-8'):
lista_datos = []
try:
with open(ruta_archivo, 'r', encoding=codificacion) as archivo_lectura:
for i, linea in enumerate(archivo_lectura, start=1):
linea = linea.strip()
if not linea:
continue
if es_binario(linea):
binario_entrada = linea
else:
binario_entrada = limpiar_a_binario(linea)
if not binario_entrada:
print(f"Línea {i}: no contiene dígitos binarios válidos, se omite.")
continue
decimal = int(binario_entrada, 2)
hora_actual = datetime.now().strftime('%H:%M:%S.%f')[:-3]
origen_trx = "POS"
destino_trx = "EPS"
mti = '0200'
iso_message_fields = {
't': mti,
'3': '001000',
'4': '000000100002',
'7': '0508155540',
'11': '000002',
'12': '155540',
'13': '0508',
'22': '021',
'19': '032',
'24': '111',
'25': '00',
'35': '6799990100000000019D2512101045120844',
'41': '38010833',
'42': '23659307 ',
'46': '1',
'48': '001',
'49': '032',
'52': 'AEDB54B10FF671F5',
'59': '021000100407070840001023209FFFFF0220002200000DB02800010040784',
'60': 'WPH0001',
'62': '0002',
}
encoded_iso_raw, _ = encode(iso_message_fields, spec)
decoded_iso_data, _ = decode(encoded_iso_raw, spec)
lista_datos.append({
'hora_procesamiento': hora_actual,
'origen_transaccion': origen_trx,
'destino_transaccion': destino_trx,
'decimal_from_input': decimal,
'iso_message_raw': encoded_iso_raw.decode('ascii'),
'iso_message_decoded': decoded_iso_data
})
with open(ruta_json, 'w', encoding=codificacion) as f_json:
json.dump(lista_datos, f_json, indent=2)
print(f"Archivo JSON guardado en: {ruta_json}")
except FileNotFoundError as fnf_error:
print(f"Error: archivo no encontrado: {fnf_error}")
except Exception as e:
print(f"Error inesperado: {e}")
if __name__ == "__main__":
# Define the input and output paths here:
ruta_archivo = r"C:\Users\mbale\Downloads\AutomatizacionAudit\LecturaAudit\POS-EPS\LecturaAudit.txt"
ruta_json = r"C:\Users\mbale\Downloads\AutomatizacionAudit\LecturaAudit\POS-EPS\Salida.json"
procesar_archivo(ruta_archivo, ruta_json)
app.run(debug=True, host="127.0.0.1", port=5000)
OK, I found the cause of the error: when I modified the Makefile for generating libapr, I added the -fPIC
parameter to the command for generating the object files instead of the command for generating the library. In fact, we can use ./configure to specify the parameter. Therefore, the correct replacement commands are as follows:
su -
# create the installation directory
mkdir /usr/local/apr
# get the tarball
cd /usr/local/src
wget https://github.com/apache/apr/archive/refs/tags/1.7.0.tar.gz
tar -zxvf 1.7.0.tar.gz
# install
cd apr-1.7.0
./buildconf
./configure --prefix=/usr/local/apr CFLAGS="-fPIC" CXXFLAGS="-fPIC"
make && make install
# replace
cp /usr/local/apr/lib/libapr-1* /home/meyok/Project/oceanbase/deps/3rd/usr/local/oceanbase/deps/devel/lib
To normalize a JSON object with nested arrays/lists in pandas, use pd.json_normalize() with record_path and meta. For deeply nested structures, normalize in stages and merge the resulting DataFrames as needed.
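For example, given hypothetical order records where each order holds a nested list of items, record_path explodes the list into rows while meta copies parent fields onto each row (the data and column names here are made up for illustration):

```python
import pandas as pd

# Hypothetical nested data: orders, each with a list of items.
data = [
    {"order_id": 1, "customer": {"name": "Ana"},
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]},
    {"order_id": 2, "customer": {"name": "Ben"},
     "items": [{"sku": "C3", "qty": 5}]},
]

# record_path flattens the nested "items" lists into rows;
# meta carries the parent fields down onto each item row.
df = pd.json_normalize(
    data,
    record_path="items",
    meta=["order_id", ["customer", "name"]],
)
print(df)
# Columns: sku, qty, order_id, customer.name; one row per item (3 rows total).
```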
The only applicable values for BorwInfoTxt would be "Borrower" or "Co Borrower".
UserDefTxt is a representation of a "User Defined Field" which is parameter driven at each individual Financial Institution. A part of this parameter is field length and type, so you'll want to make sure the parameter configuration you have for "Servicing Office" allows lengths up to 15 characters for the use of "Little Rock, AR".
Here is a post I found helpful for checking the Timeline Control settings on the form.
On the Timeline, there is a setting "Enable Attachment preview for file" make sure this is enabled so the viewing window appears.
If you are experiencing issues with actually seeing information in the preview - e.g. the preview appears blank with just the name of the file - try opening the record in an Incognito window or a different browser.
In Chrome, I was having trouble seeing the file preview, it just showed up as a blank screen with the file name. However, I could see the image attachments. When I switched to MS Edge, I was able to see both the PDF file and the PNG image.
Hope this helps!
Link to reference article: https://selfpacedcourses.softchief.com/how-to-enable-attachment-file-preview-in-power-apps-model-driven-app/
I'm having the same problem. Were you able to solve it?
This error often occurs when Kubernetes is being installed for the first time by new users who pick the ARM build on a Windows system. While ARM is a valid architecture, that build targets ARM CPUs (for example, Apple Silicon Macs). Therefore, you must choose the correct version for Windows if you're installing the software on WSL, which is x86_64.
You can put public:
over double x()
, like this:
//...
Q_PROPERTY(double x READ x WRITE setX NOTIFY xChanged)
Q_PROPERTY(double y READ y WRITE setY NOTIFY yChanged)
public:
double x(){return m_x;}
double y(){return m_y;}
//...
This flag worked for me:
--disable-features=OverscrollHistoryNavigation
Should it be "/src" or "src"?
If your question is about how to generate iat
and ext
, you can do this using JMeter's built-in __groovy() function
iat
- ${__groovy((new Date().time / 1000).round(),)}
ext
- ${__groovy((new Date().time / 1000).round() + 7200,)}
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
The issue still persists for me even though I tried every solution, from going incognito to creating a copy of the file. I am stuck on the pop-up where it says something went wrong.
This is very frustrating; everything was working properly and all of a sudden I am stuck on this pop-up. Are there any alternative solutions?
I had a similar problem in one of my projects where I was updating a bar chart.
What I did was make a variable exampleChart
referencing the ApexChart
component and then call exampleChart.UpdateSeriesAsync()
method to update the chart whenever and wherever I want.
With your code, it would be like so:
@code :
private ApexChart<ChartData>? exampleChart { get; set; }
private async Task ReloadChartSeries()
{
if (exampleChart is not null)
await exampleChart.UpdateSeriesAsync();
}
view :
<ApexChart @ref="exampleChart" TItem="spGetGraphPloegen" Options="chartOptionsVroege" Height="300">
<ApexPointSeries TItem="spGetGraphPloegen"
Items="GraphVroege"
Name="Vroege"
SeriesType="SeriesType.Line"
XValue="e => e.Ts"
YValue="e => (decimal)e.Cap_Act" />
</ApexChart>
After that, you just have to call ReloadChartSeries()
when you are selecting the new day.
Someone posted a similar question with the javascript tag:
https://stackoverflow.com/questions/74314021/how-do-i-refresh-new-data-in-apexcharts
//img[@jsname="kn3ccd"]
Here, I select all images with jsname
equal to "kn3ccd". (But it will only have 1 result anyway, the preview image.)
Google seems to use the same jsname
for the preview image, so I highly suspect it just looks random because it's obfuscated.
Just tested this and both methods give Users and Groups as response.
I got over this by restarting my IDE and using dotnet build
to build the application
You can remove these new lines with online tools like https://webtexttools.com/texttools/delete-whitespaces/
Ok, I found a way now, but not with SEQUENCE
. I used SUM
and MAP
instead (RESULT = 68.000 in my case):
=SUM(MAP(B1:B17;H1:H17;LAMBDA(a;b;a*CHOOSE(MATCH(b;K$1:M$1;0);K$3;L$3;M$3))))
I'm sure it's possible to do something like this but in my opinion you are creating more issues than you solve. Most implementations I've seen host the data dictionary content on a shared file server and publish the link on some kind of dictionary or starting point page. Depending on how sophisticated your devops organization is this can be mostly or entirely automated.
To embed the DD into the xCenter deployment you'd either need to host the DD externally and embed a link to it - which you said you didn't want to do - or post-process the .war file to embed the DD into the deployment. The latter will inflate the .war file and will cause you to revert that change for higher environments, including production.
Did you recently update to Spring Boot 3.5.1 or Tomcat 10.1? If so, this might be related to stricter limits for multipart requests.
You can change the limit (e.g. from 10 to 30) by setting the following property:
server:
  tomcat:
    max-part-count: 30
OR
server.tomcat.max-part-count=30
See https://github.com/spring-projects/spring-boot/releases/tag/v3.5.1
This one is nice to check if the folder is getting any new log files.
while true; do find . -type f -name "*.txt" | head -1 | xargs ls -l ; sleep 60; done
You may also need to use SOCKS5 for host name resolutions. In that case use socks5h://
protocol instead of socks5://
:
pip install package_name --proxy socks5h://127.0.0.1:13789
Before that you need to install PySocks. You can manually download its WHL from here.
The example you referred to is demonstrating "inference", where a pre-trained model (fasterrcnn_resnet50_fpn_v2) is used to detect objects in a single image (grace_hopper_517x606.jpg).
However, if your goal is to further train a pre-trained model using your own dataset (e.g., a folder of labeled images), this process is called transfer learning.
To do this, you would:
Wrap your image dataset using a custom Dataset class (e.g., by subclassing torch.utils.data.Dataset).
Pass it to a DataLoader to efficiently load batches of data.
Feed the data into the model and train it using the standard PyTorch training loop.
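The Dataset contract is just two methods, __len__ and __getitem__. As a rough pure-Python sketch of the idea (torch-free for brevity; in a real project you would subclass torch.utils.data.Dataset and let torch.utils.data.DataLoader do the batching, and FolderDataset/batches are made-up names):

```python
# Sketch of the Dataset protocol: __len__ + __getitem__.
# In real code, subclass torch.utils.data.Dataset instead.
class FolderDataset:
    def __init__(self, samples):
        # samples: list of (image_path, label) pairs; plain tuples stand in
        # for loaded/transformed image tensors here.
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

def batches(dataset, batch_size):
    """Naive stand-in for DataLoader: yield fixed-size batches."""
    for start in range(0, len(dataset), batch_size):
        stop = min(start + batch_size, len(dataset))
        yield [dataset[i] for i in range(start, stop)]

ds = FolderDataset([(f"img_{i}.jpg", i % 2) for i in range(5)])
print(len(ds))                   # 5
print(list(batches(ds, 2))[-1])  # last batch holds the leftover sample
```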
To "activate" a part instance in a product structure, select the part in the tree and switch the workbench (check first if the Assembly workbench is active).
CATIA.StartWorkbench("PrtCfg")
But the selection still refers to the root document which is open in the active document.
To change the background color of a Chakra UI Switch
, use the sx
prop or the colorScheme
prop, or customize the styles like this:
<Switch sx={{ '.chakra-switch__track': { bg: 'red.500' } }} />
Or use colorScheme
:
<Switch colorScheme="teal" />
Changed browser and problem solved.
Getting the same issue; I've tried almost everything, but I don't know why no one has worked on it: hiding the scrollbar of the whole body in a Next.js/React Tailwind project. Very frustrating!
print("creating Table")
nums = range(17501, 17570)
with open("new.txt", "w") as f:
    for i in range(len(nums)):
        print(nums[i], "1", "1", "1", "1")
        f.write(f"{nums[i]} 1 1 1 1\n")
Use FLUSHALL (for example, redis-cli FLUSHALL) to remove all data from the Redis cache.
I managed to obtain the expected behavior; I was just missing the required SDK parameter configuration described in Requirements.
Most programming languages can't represent all decimal fractions exactly in binary floating point, so precision is lost.
If you know how many decimal places your number must have, I suggest you round or format the result to that many places (or use a decimal type).
You can test it with JavaScript in your browser console by adding 0.1 + 0.7.
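The same effect is easy to reproduce in Python, along with the two usual fixes (rounding to a known number of decimal places, or using the decimal module for exact decimal arithmetic):

```python
from decimal import Decimal

x = 0.1 + 0.7
print(x)                                # 0.7999999999999999 (binary float error)
print(round(x, 1))                      # 0.8 (fine if you know the precision)
print(Decimal("0.1") + Decimal("0.7"))  # 0.8 (exact decimal arithmetic)
```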
Since April 2024 there's an official alternative on
https://docs.docker.com/engine/network/drivers/host/#docker-desktop
For Firefox just set network.http.sanitize-headers-in-logs
to false
in about:config.
After opening an issue on GitHub related to this question, I received a response from the maintainers. Here is the link for the issue: https://github.com/micronaut-projects/micronaut-data/issues/3373
You've already got the server running on the same IP address & port combo; try using a different NIC (i.e., a different NIC's IP address). As you're attempting a discover/broadcast, you need the port number to remain the same.
I had a similar issue with a Do Until and it turned out to be an issue with one of the actions inside of the loop instead of an expression related to the Do Until functionality.
A scroll bar is only visible if the content is bigger than its scrollable container.
E.g. here, the content's height is 800px but its container is only 400px:
(Please provide more context if this doesn't answer your question)
.container{
overflow-y: scroll;
width: 200px;
height: 400px; /* Half of its content */
}
<div class="container">
<img src="https://placehold.co/200x800">
</div>
Maybe this will work
1. Foundation: Android Development Basics
Languages:
Tools:
Android Studio
Android SDK & emulator
Key Concepts:
Activity, Fragment lifecycle
UI layouts (XML, Jetpack Compose)
Data storage (SQLite, Room, SharedPreferences)
2. Dive into M-Commerce Concepts
E-Commerce Fundamentals:
Payment Gateways & Security:
Integrate Stripe, PayPal, or Razorpay SDKs
Understand PCI-DSS compliance basics
Back-End & APIs:
RESTful APIs (Retrofit, Volley)
Real-time databases (Firebase)
3. Learning Resources
Online Courses:
Udacity “Android Developer Nanodegree”
Coursera “Kotlin for Android” specialization
Udemy “The Complete Android & Kotlin Developer Course”
Documentation & Tutorials:
Official Android Developer docs (developer.android.com)
Stripe and PayPal integration guides
Communities:
Stack Overflow, Reddit r/androiddev
Local meetups, Android Slack channels
4. Development Roadmap & Timeline
Phase | Duration | Goals |
---|---|---|
Basics & Setup | 1–2 months | Master Java/Kotlin, Android Studio, UI fundamentals |
Core M-Commerce Logic | 2–3 months | Implement product listing, cart, order processing |
Payment Integration | 1–2 months | Integrate payment gateway SDKs, ensure security compliances |
Testing & Refinement | 1–2 months | Unit/UI tests, beta testing, performance optimization |
Launch & Iteration | Ongoing | Publish on Play Store, gather feedback, add features |
Total Estimated Time: 6–9 months to a stable MVP
5. Tips for Success
Start small: build a basic shopping app before adding complex features.
Leverage open-source projects on GitHub for reference.
Write clean, modular code and document your APIs.
Automate testing: Espresso for UI, JUnit for logic.
Stay updated: follow Android Dev Blog and Google I/O talks.
I ran into the same issue, and what ended up working for me was running the command with sudo
at the beginning.
Example: sudo npx prisma generate
.
It seems to be a permission issue on some systems, and using sudo
allowed Prisma to generate the client without errors.
Have a look at MDG Technology Icons: https://sparxsystems.com/enterprise_architect_user_guide/17.1/the_application_desktop/projectbrowseroverlays.html
I'm trying to get it to work on my side, but when I want to run the query I get the below error.
What is wrong?
DataSource.Error: Web.Contents failed to get contents from 'https://portail.agir.orange.com/rest/api/2/search?jql=project%3DSOMEPROJECT&fields=summary%2Creporter%2Cstatus&maxResults=1000' (400):
Details:
DataSourceKind=Web
DataSourcePath=https://portail.agir.orange.com/rest/api/2/search
Url=https://portail.agir.orange.com/rest/api/2/search?jql=project%3DSOMEPROJECT&fields=summary%2Creporter%2Cstatus&maxResults=1000
I tried the same code with the most recent gRPC version.
The issue observed with version 1.50.1 was not present in 1.73.1 anymore.
Simply updating to the latest version should fix the problem.
I have converted this wrong snippet following the Grafana Alloy Syntax Guide for the new .alloy file syntax and the Grafana Components reference documentation.
You might try comparing ReceivedTime with CDate("2025-06-06")
, switch to SenderEmailAddress
instead of SenderName
, guard non‑mail items via If TypeOf Item Is MailItem
, and map your split body chunks straight into Cells(r,4)
for cleaner output.
So I ended up with this in my view controller.
When the device is in macro mode, the physical ultra-wide camera's zoom factor is 1.0 (because macro is just a cropped ultra-wide image), and the virtual device's zoom factor must be anything more than 2.0. Why? Because if it's less than 2.0, that means the virtual device is using the physical ultra-wide normally. While it seems contradictory, it is what it is; maybe someone else can give a better explanation 😅
class MacroViewController: UIViewController {
    var zoomFactorObserver: NSKeyValueObservation?
    var activeConstituentObserver: NSKeyValueObservation?

    let videoDeviceDiscoverySession: AVCaptureDevice.DiscoverySession = {
        let types: [AVCaptureDevice.DeviceType] = [.builtInWideAngleCamera, .builtInDualCamera, .builtInDualWideCamera, .builtInTripleCamera, .builtInTrueDepthCamera, .builtInUltraWideCamera]
        return AVCaptureDevice.DiscoverySession(deviceTypes: types, mediaType: .video, position: .unspecified)
    }()

    @objc dynamic var videoDeviceInput: AVCaptureDeviceInput? {
        didSet {
            guard let device = videoDeviceInput?.device else { return }

            var isInMacroMode: Bool {
                guard let virtualDevice = self.videoDeviceInput?.device else { return false }
                if virtualDevice.isVirtualDeviceWithUltraWideCamera,
                   let activeCamera = virtualDevice.activePrimaryConstituent,
                   let ultraWideCamera = self.videoDeviceDiscoverySession.backBuiltInUltraWideCamera,
                   activeCamera.uniqueID == ultraWideCamera.uniqueID,
                   virtualDevice.videoZoomFactor >= 2.0,
                   ultraWideCamera.videoZoomFactor == 1.0 {
                    return true
                } else {
                    return false
                }
            }

            func showMacroIconIfNeeded() {
                macroButton.isHidden = !isInMacroMode
            }

            self.zoomFactorObserver = device.observe(\.videoZoomFactor) { [unowned self] virtualDevice, change in
                DispatchQueue.main.async {
                    showMacroIconIfNeeded()
                }
            }

            if device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
                device.setPrimaryConstituentDeviceSwitchingBehavior(.auto, restrictedSwitchingBehaviorConditions: [])
                activeConstituentObserver = device.observe(\.activePrimaryConstituent, options: [.new]) { [weak self] device, change in
                    guard let self = self else { return }
                    DispatchQueue.main.async {
                        showMacroIconIfNeeded()
                    }
                }
            }
        }
    }
}
And below are the extensions I used:
extension AVCaptureDevice {
    var isVirtualDeviceWithUltraWideCamera: Bool {
        switch deviceType {
        case .builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera:
            return true
        default:
            return false
        }
    }
}

extension AVCaptureDevice.DiscoverySession {
    var backBuiltInUltraWideCamera: AVCaptureDevice? {
        return devices.first(where: { $0.position == .back && $0.deviceType == .builtInUltraWideCamera })
    }
}
can I printf x directly?
Not out of the box.
looking for an answer regarding glibc mostly
Implement a custom conversion specifier as described in https://www.gnu.org/software/libc/manual/html_node/Customizing-Printf.html, using strfromf16 to do the actual formatting.
I'm having trouble with an HTML email button that has a dark gradient background and white text. The button displays correctly in most email clients, but Gmail's mobile apps (iOS and Android) are inverting the text color in dark mode, making it unreadable.
Here's what I'm working with:
<a href="#" style="
background: linear-gradient(135deg, #334D40 0%, #2a3d33 100%);
color: #ffffff;
padding: 16px 32px;
text-decoration: none;
">
Confirm Your Email
</a>
This works fine in desktop Gmail (light and dark mode) and Apple Mail, but fails in Gmail mobile apps.
Gmail mobile apps ignore color: #ffffff !important and force the white text to become black in dark mode. This makes the text invisible against the dark gradient background.
What I've tried so far:
- Using !important on color properties
- Different color formats (hex, rgb, hsl)
- -webkit-text-fill-color
- text-shadow with color: transparent
- background-clip: text (worked on iOS but broke Android)
None of these approaches work consistently across both mobile platforms.
After extensive testing, I found that you need a multi-layered approach. The key is using a mobile-first strategy with mix-blend-mode and then resetting it for webmail clients.
<style>
  /* Reset for webmail clients */
  @media screen and (min-width: 601px) {
    .web-reset-wrapper {
      background: transparent !important;
      mix-blend-mode: normal !important;
    }
    .web-reset-text {
      color: #ffffff !important;
      mix-blend-mode: normal !important;
    }
  }
  /* Android Gmail fix */
  u ~ div .android-fix-wrapper {
    background: transparent !important;
    mix-blend-mode: normal !important;
  }
  u ~ div .android-fix-text {
    color: #ffffff !important;
    mix-blend-mode: normal !important;
  }
</style>
<div style="text-align: center;">
<!--[if mso]>
<v:roundrect xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="urn:schemas-microsoft-com:office:word" href="http://example.com" style="height:55px;v-text-anchor:middle;width:300px;" arcsize="10%" strokecolor="#334D40" fillcolor="#2a3d33">
<v:fill type="gradient" color="#334D40" color2="#2a3d33" angle="135" />
<w:anchorlock/>
<center style="color:#ffffff;font-family:sans-serif;font-size:16px;font-weight:bold;">
Confirm Your Email
</center>
</v:roundrect>
<![endif]-->
<a href="http://example.com" style="
background: linear-gradient(135deg, #334D40 0%, #2a3d33 100%) !important;
border-radius: 6px;
color: #ffffff !important;
display: inline-block;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Arial, sans-serif;
font-size: 16px;
font-weight: 600;
min-width: 250px;
padding: 16px 32px;
text-align: center;
text-decoration: none !important;
mso-hide: all;
">
<span class="web-reset-wrapper android-fix-wrapper" style="background-color: #ffffff; mix-blend-mode: lighten; display: inline-block;">
<span class="web-reset-text android-fix-text" style="color: #000000; mix-blend-mode: exclusion; display: inline-block;">
Confirm Your Email
</span>
</span>
</a>
</div>
The solution uses nested spans with mix-blend-mode to force white text on mobile Gmail apps. The outer span uses the lighten blend mode with a white background, and the inner span uses the exclusion blend mode with black text.
For webmail clients, the CSS media query detects larger screens and resets the blend modes back to normal, allowing the standard white text to display correctly.
The Android-specific selector u ~ div handles quirks in Gmail's Android app rendering.
This approach has been tested across multiple email clients and provides consistent results for gradient buttons in dark mode.
Counterfactual variables replicate the entire data set (see ?avg_predictions). So, to replicate it by hand:
mdf <- mdf2 <- mydf
mdf$treat <- 0           # counterfactual copy: everyone untreated
mdf2$treat <- 1          # counterfactual copy: everyone treated
mdf <- rbind(mdf, mdf2)  # stack both copies of the data set
res1 <- predict(mod, newdata = mdf, type = "response")
mean(res1[mdf$nodegree == 0]) # 8290
mean(res1[mdf$nodegree == 1]) # 6046
In IntelliJ I see the same error, that the base parser cannot be compiled. When I open PostgreSQLParser I see this:
But the lexer has no such issues:
When I select Build and then Build Project in IntelliJ, all goes well. So it seems to just be an issue that the generated parser is large.
I think you need to create an API store based on your modules, where each module has its own BASE_URL, and use a singleton for the fetch client.
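A minimal sketch of that idea (the class name, module names, and URLs are illustrative, not from the question):

```typescript
// One API client per module, each with its own base URL,
// all sharing the same underlying fetch.
class ApiClient {
  constructor(private readonly baseUrl: string) {}

  // Build the full URL for a request path
  url(path: string): string {
    return `${this.baseUrl}${path}`;
  }

  async get<T>(path: string): Promise<T> {
    const res = await fetch(this.url(path));
    if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
    return res.json() as Promise<T>;
  }
}

// One store per module, each with its own BASE_URL
const usersApi = new ApiClient("https://api.example.com/users");
const ordersApi = new ApiClient("https://api.example.com/orders");
```

Each module then imports only its own client, which keeps the base URLs in one place per module.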
In my case, I was using State together with Equatable and I had forgotten to list the Equatable variables in props:
@override
List<Object?> get props => [status, entity];
The code below works in the updated package :)
worksheet.Cells[1, 1].Style.Font.Bold = true;
These days you should use "pwsh" and not "powershell".
But yes, just start "pwsh" (possibly "pwsh -NoProfile") and you get a new PowerShell inside your PowerShell.
"exit" will terminate the innermost one.
In my tests, "Remove-Module" works fine for script-type modules. It also works for binary/.NET-based ones, as long as you run it before any invocation of that module ;-) After that, you are caught by .NET AppDomain limitations.
According to your error messages, libapr-1.a is a static library compiled without -fPIC, so I think you need to rebuild libapr with -fPIC.
Try using a blur-image tool; it may give you some ideas.
Thank you. I tried the Troubleshooter and it shows "No apps on your account," but that's not true: I do have some apps (see screenshot), some of them are showing ads, and my ad unit IDs from AdMob are correct.
There is no way you can sign out a user even via the Admin SDK. All sign-out operations must happen from the device on which the user is signed in. This means that you cannot revoke an access token once it is minted.
Please also note that even if you disable the user's account in Firebase Console, the user may continue to have access for up to an hour. If you don't want that to happen, then you can implement a lockout system as explained by @FrankvanPuffelen in the following answer:
Try https://marketplace.cursorapi.com/items/?itemName=Vue.volar
But my honest suggestion is: if you only focus on frontend, use WebStorm (which is free).
from google_play_scraper import Sort, reviews_all
import pandas as pd
# Define the app ID (this one is Clash of Clans)
APP_ID = 'com.supercell.clashofclans'
# Scrape all reviews
# You can adjust 'lang' (language) and 'country' to get reviews from specific regions.
# 'sort' can be Sort.NEWEST, Sort.RATING, or Sort.HELPFULNESS
# 'sleep_milliseconds' can be increased if you encounter issues, to space out requests.
reviews_data = reviews_all(
    APP_ID,
    sleep_milliseconds=0,  # No delay between requests
    lang='en',             # English reviews
    country='in',          # Reviews from India
    sort=Sort.NEWEST,      # Sort by newest reviews
    # filter_score_with=5  # Uncomment to filter for specific star ratings (e.g., 5-star reviews only)
)
# Convert the list of dictionaries to a Pandas DataFrame for easier analysis
df = pd.DataFrame(reviews_data)
# Display the first few rows of the DataFrame
print(df.head(40))
You can get the app ID from the Play Store URL: it's the value that comes after "id=".
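For example, a short sketch of pulling the app ID out of a Play Store URL with the standard library (the URL below is just an example):

```python
from urllib.parse import urlparse, parse_qs

# The app ID is the value of the "id" query parameter in the Play Store URL
url = "https://play.google.com/store/apps/details?id=com.supercell.clashofclans"
app_id = parse_qs(urlparse(url).query)["id"][0]
print(app_id)  # com.supercell.clashofclans
```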