Since MQTT v5 was released the year after this question was posted, I'd suggest putting the sensor ID in the User Properties map of each message. That seems a better place for an identifier than the topic or the payload. Yes, it will increase the message size, but no more (or not much more) than having it in the topic or payload.
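For example, with the Python paho-mqtt client (1.x) this is just a Properties object on publish; the broker address, topic, and property key here are my own placeholders:

import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(protocol=mqtt.MQTTv5)
client.connect("broker.example.com")  # placeholder broker

props = Properties(PacketTypes.PUBLISH)
props.UserProperty = ("sensor-id", "sensor-42")  # identifier travels as metadata
client.publish("telemetry/temperature", payload="21.7", properties=props)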
To reduce the size of the thumb, use RoundSliderThumbShape / RoundRangeSliderThumbShape.
Solution resource: https://api.flutter.dev/flutter/material/SliderThemeData/rangeThumbShape.html
SliderTheme(
  data: SliderTheme.of(context).copyWith(
    thumbShape: RoundSliderThumbShape(enabledThumbRadius: 4),
    // increase or decrease enabledThumbRadius to change the size
    rangeThumbShape: RoundRangeSliderThumbShape(enabledThumbRadius: 8),
  ),
  child: RangeSlider(
    values: RangeValues(0, 20),
    min: 0,
    max: 100,
    onChanged: (v) {},
  ),
)
How can I change the HttpClient implementation in .csproj? @Anand
Found it. Apparently there is a lookup function. This works perfectly in my templates:
{{- range .Values.onepass.items }}
{{- if not (lookup "onepassword.com/v1" "OnePasswordItem" .Release.Namespace .name) -}}
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: {{ .name }}
  annotations:
    operator.1password.io/auto-restart: {{ .autorestart | default true | quote }}
spec:
  itemPath: {{ .path }}
---
{{- end }}
{{- end }}
OpenTofu wants another provider before the dynamic provider. So changing the code to
provider "aws" {
alias = "by_region"
region = each.value
for_each = toset(var.region_list)
}
provider "aws" {
region = "us-east-1"
}
variable "region_list" {
type = list(string)
default = ["us-east-1", "us-east-2", "us-west-1", "us-west-2"]
}
will fix the error
I included this parameter, which is in the documentation, and it worked:
<p style="text-align: left;">Este texto está alineado a la izquierda.</p>
<p style="text-align: center;">Este texto está centrado.</p>
<p style="text-align: right;">Este texto está alineado a la derecha.</p>
<p style="text-align: justify;">
Este texto está justificado, lo que significa que se alinea uniformemente en ambos márgenes, mejorando la presentación en textos largos.
</p>
When using Flask-Smorest, you can disable the automatic documentation of default error responses across all endpoints with a configuration pattern like this:
from flask import Flask
from flask_smorest import Api

def setup_api(app: Flask, version: str = "v1"):
    # init API
    _api = Api(
        spec_kwargs={
            "title": f"{app.config['API_TITLE']} {version}",
            "version": f"{version}.0",
            "openapi_version": app.config["OPENAPI_VERSION"],
        },
        config_prefix=version.upper(),
    )
    _api.DEFAULT_ERROR_RESPONSE_NAME = None  # key setting: disables the default error responses
    _api.init_app(app)
    # register blueprints
    register_blueprints(_api, version)
Try out Modernblocks, I think you might like it! https://moblstudio.vercel.app/
I think I found what I want. I need to do the following:
import asyncio

async def long_task():
    try:
        async with asyncio.timeout(5):
            print("Long task started")
            await asyncio.sleep(10)  # Simulate a long operation
            return "Long task completed"
    except asyncio.TimeoutError:
        print("Long task timed out")  # Do something when the timeout is reached
    except asyncio.CancelledError:
        print("Long task cancelled")
        raise

async def main():
    background_task = asyncio.create_task(long_task())
    # Do useful work, then wait for the background task to finish
    await background_task

asyncio.run(main())
I'm having a similar issue connecting to my Supabase project using the iOS simulator.
I wonder if it is because Apple ATS sees a URL that ends in "supabase.co" and doesn't like it.
If I update my infoPlist like this, it works. I don't know if this is a good long-term solution though.
"ios": {
"supportsTablet": true,
"bundleIdentifier": "com.xxxxxxx.expoapp",
"infoPlist": {
"ITSAppUsesNonExemptEncryption": false,
"NSAppTransportSecurity": {
"NSAllowsArbitraryLoads": true,
"NSExceptionDomains": {
"supabase.co": {
"NSExceptionAllowsInsecureHTTPLoads": true,
"NSIncludesSubdomains": true
}
}
}
}
},
It is recommended to use AuthOptions instead of NextAuthOptions for better clarity and consistency. You can import it directly using:
import { AuthOptions } from 'next-auth';
This helps align your code with the latest conventions in the NextAuth.js documentation.
export const authConfig: AuthOptions = {}
I believe you can set the specific warning codes; see the MSBuild command-line reference:
https://learn.microsoft.com/en-us/visualstudio/msbuild/msbuild-command-line-reference?view=vs-2022
Under Settings / Code and automation / Actions, you can select which actions will be allowed.
Your template does not match the actual S3 layout. Omit the `<col_name>=` prefix and it should work. Also, when projection is enabled, Athena will not consider manually added partitions.
'storage.location.template'='s3://mybucket/productA/${Model}/${Year}/${Month}/${Date}/'
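For context, here is a hedged sketch of the surrounding projection properties (the table name and ranges are assumptions; only the template line above comes from the actual setup):

ALTER TABLE mytable SET TBLPROPERTIES (
  'projection.enabled'='true',
  'projection.Model.type'='injected',
  'projection.Year.type'='integer', 'projection.Year.range'='2000,2030',
  'projection.Month.type'='integer', 'projection.Month.range'='1,12', 'projection.Month.digits'='2',
  'projection.Date.type'='integer', 'projection.Date.range'='1,31', 'projection.Date.digits'='2',
  'storage.location.template'='s3://mybucket/productA/${Model}/${Year}/${Month}/${Date}/'
);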
I'm having the same issue right now. It auto-updated to SDK 53 last night and I haven't been able to figure out how to fix it.
I am getting the same issue when I try to upgrade from 52 to 53.
You're using the wrong addressing mode. In all likelihood you want to use Indirect Indexed. It can also be a monitor issue.
According to the Microsoft documentation at https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.querystring.add?view=aspnetcore-9.0,
Request.QueryString.Add returns a QueryString, not void. However, Request.QueryString.Add does not mutate the request's query string; therefore, you need to do something like this:
Request.QueryString = Request.QueryString.Add("returnUrl", url.ToString());
Use the command below to bypass the security check. The only difference is that I have not used an apostrophe for the directory.
git config --global --add safe.directory '*'
So I was having this problem in Visual Studio 2022. In my case, the problem appears to have been that the stored procs were originally created without a schema, so CREATE PROCEDURE [proc_SomeProc] instead of CREATE PROCEDURE [dbo].[proc_SomeProc]. Once I added the schema in both the Visual Studio project and the proc on SQL Server, no difference was reported.
That helps, but I have a question: how do I filter all "Microsoft Applications" in the Graph API?
Use aria-haspopup="true" and aria-expanded="false" attributes on dropdown triggers (not fully functional without JS but useful for screen readers).
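A minimal sketch of the JS side, assuming a trigger button with id "menu-button" (TypeScript):

const trigger = document.getElementById("menu-button");
trigger?.addEventListener("click", () => {
  // Flip aria-expanded so screen readers announce the open/closed state
  const expanded = trigger.getAttribute("aria-expanded") === "true";
  trigger.setAttribute("aria-expanded", String(!expanded));
});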
I know this is old but this may help others experiencing the same issue:
QueryString.Add does not mutate the request so you must do something like the following:
context.Request.QueryString = context.Request.QueryString.Add("name", "value");
None of that worked for me. It only worked after following these steps:
https://marketplace.visualstudio.com/items/?itemName=bradlc.vscode-tailwindcss
When your pull request is merged, the original repo creates a new merge commit, which doesn't exist in your fork. That's why you're "1 commit behind." Even with no file changes, Git tracks the merge as a unique commit. This could technically cause a loop if both repos keep pulling and merging empty commits back and forth.
Object.assign didn't seem to work for me, probably because I was resetting with a static const array that was somehow being used within the reactive. Instead I used the following in Vue 3:
data.splice(0, Infinity, ...defaultData.slice())
There were two solutions to this problem:
For the web browser persistence, after the initial authentication, JS in the WPF app would find the necessary section on the MS login page and click it for the user.
For other applications, Imprivata, an app integral to these desktops, would persist the user login and add the credentials for them.
Apple devices include a default passkey provider, called Apple Passwords (formerly known as iCloud Keychain). If the user does not install their own passkey provider, Apple Passwords is used.
Apple Passwords creates synced passkeys, and they are synced to all other devices signed into the same Apple Account.
Hello, did you find any solution for this problem? I am facing the same after upgrading to the Expo SDK 53.
Take a look at this one. Maybe it will help
I wrote a utility called "Bits" which does exactly what you want. It installs an Explorer right-click menu that when selected analyses the file and tells you if it’s 32 or 64-bit.
It’s around 5K in size, has no dependencies beyond what is already present on the system and is a single file. The only installation required is to register a right-click menu with Explorer. The installer/uninstaller is embedded in the EXE.
Once installed you simply right-click on the file you want to check and choose, “32 or 64-bit?” from the menu.
You can get it or view the source here:
I'm not sure what the issue was, but going back to Flutter 3.24.0 fixed the red screen. It's possible that a newer version might also work.
I encountered the same issue while trying to use the @for loop to populate a dropdown in Angular 19.2.0.
I was able to resolve it by adding the track keyword.
The problem was that I had installed the snap version of Firefox. I deleted that version and installed the deb version, and the program worked.
I came up with an answer. It's not necessarily the best answer and I would love to be corrected if any Springdoc experts see this, but it's the best I could do with the time I had for this problem.
After some reverse engineering, my estimation is that Springdoc does not support this, or even come close to supporting it; i.e., a simple config change will not make it "just work".
Springdoc does have a QuerydslPredicateOperationCustomizer that supports Querydsl predicates in a similar fashion to what I'm asking, but it is triggered by the @QuerydslPredicate annotation and relies on domain-specific configuration on the annotation, which is not available for the annotated RootResourceInformation argument in Spring Data REST's RepositoryEntityController. It also only gets invoked when adequate operation context is provided, which Springdoc does not include for Spring Data REST's endpoints (perhaps for no other reason than that doing so breaks the QuerydslPredicateOperationCustomizer - I'm not sure). Long story short, this customizer doesn't work for this use case.
Ideally, this should probably be fixed within the QuerydslPredicateOperationCustomizer, but that is made more difficult than it should be by the fact that the DataRestRepository context is not available in that scope, which would be the simplest path to the entity domain type from which parameters could be inferred. Instead, the available context refers to the handler method within the RepositoryEntityController, which is generic to all entities and yields no simple way of inferring domain types.
To make this work at that level, the customizer would have to redo the process of looking up the domain type from the limited context that is available (which seems hard to implement without brittleness), or perhaps preferably, additional metadata would need to be carried throughout the process up to this point.
Any of that would require more expertise with Springdoc than I have, plus buy-in from Springdoc's development team. If any of them see this and have interest in an enhancement to this end, I would be happy to lend the knowledge I have of these integrations.
I extended Springdoc's DataRestRequestService with a mostly-identical service that I marked as the @Primary bean of its type, thus replacing the component used by Springdoc. In its buildParameters method, I added the line buildCustomParameters(operation, dataRestRepository); which invoked the methods below. It's imperfect to be sure, but it worked well enough for my purposes (which was mainly about being able to use OpenAPI Generator to generate a fully functional SDK for my API).
public void buildCustomParameters(Operation operation, DataRestRepository dataRestRepository) {
    if (operation.getOperationId().startsWith("getCollectionResource-")) {
        addProjectionParameter(operation);
        addQuerydslParameters(operation, dataRestRepository.getDomainType());
    } else if (operation.getOperationId().startsWith("getItemResource-")) {
        addProjectionParameter(operation);
    }
}

public void addProjectionParameter(Operation operation) {
    var projectionParameter = new Parameter();
    projectionParameter.setName("projection");
    projectionParameter.setIn("query");
    projectionParameter.setDescription(
        "The name of the projection to which to cast the response model");
    projectionParameter.setRequired(false);
    projectionParameter.setSchema(new StringSchema());
    addParameter(operation, projectionParameter);
}

public void addQuerydslParameters(Operation operation, Class<?> domainType) {
    var queryType = SimpleEntityPathResolver.INSTANCE.createPath(domainType);
    var pathInits =
        Arrays.stream(queryType.getClass().getDeclaredFields())
            .filter(field -> Modifier.isStatic(field.getModifiers()))
            .filter(field -> PathInits.class.isAssignableFrom(field.getType()))
            .findFirst()
            .flatMap(
                field -> {
                    try {
                        field.setAccessible(true);
                        return Optional.of((PathInits) field.get(queryType));
                    } catch (Throwable ex) {
                        return Optional.empty();
                    }
                })
            .orElse(PathInits.DIRECT2);
    var paths = getPaths(queryType.getClass(), pathInits);
    var parameters =
        paths.stream()
            .map(
                path -> {
                    var parameter = new Parameter();
                    parameter.setName(path);
                    parameter.setIn("query");
                    parameter.setRequired(false);
                    return parameter;
                })
            .toArray(Parameter[]::new);
    addParameter(operation, parameters);
}

protected Set<String> getPaths(Class<?> clazz, PathInits pathInits) {
    return getPaths(clazz, "", pathInits).collect(Collectors.toSet());
}

protected Stream<String> getPaths(Class<?> clazz, String root, PathInits pathInits) {
    if (EntityPath.class.isAssignableFrom(clazz) && pathInits.isInitialized(root)) {
        return Arrays.stream(clazz.getFields())
            .flatMap(
                field ->
                    getPaths(
                        field.getType(),
                        appendPath(root, field.getName()),
                        pathInits.get(field.getName())));
    } else if (Path.class.isAssignableFrom(clazz) && !ObjectUtils.isEmpty(root)) {
        return Stream.of(root);
    } else {
        return Stream.of();
    }
}

private String appendPath(String root, String path) {
    if (Objects.equals(path, "_super")) {
        return root;
    } else if (ObjectUtils.isEmpty(root)) {
        return path;
    } else {
        return String.format("%s.%s", root, path);
    }
}

public void addParameter(Operation operation, Parameter... parameters) {
    if (operation.getParameters() == null) {
        operation.setParameters(new ArrayList<>());
    }
    operation.getParameters().addAll(Arrays.stream(parameters).toList());
}
Disclaimers:
This has undergone limited debugging and testing as of today, so use at your own risk.
This documents all initialized querydsl paths as string parameters. It would be cool to improve that using the actual schema type, but for my purposes this is good enough (since all query parameters have to become strings at some point anyway).
Actually doing this is very possibly a bad idea for many use cases, as many predicate options may incur very resource-intensive queries which could be abused. Use with caution and robust authorization controls.
As of this writing, Springdoc's integration with Spring Data REST has a significant performance problem, easily taking minutes to generate a spec for more than a few controllers and associations. This solution neither improves nor worsens that issue significantly. I'm just noting that here so that if others encounter it they are aware it is unrelated to this thread.
Versions that this worked with:
org.springframework.boot:spring-boot:3.4.1
org.springdoc:springdoc-openapi-starter-webmvc-ui:2.8.6
com.querydsl:querydsl-core:5.1.0
com.querydsl:querydsl-jpa:5.1.0:jakarta
Your fork is "1 commit behind" because GitHub created a merge commit in the upstream repo when your pull request was accepted. That commit doesn't exist in your fork until you sync it manually.
Yes, if both sides keep sending PRs to sync, it could create an endless loop of empty merge commits.
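If you just want to catch up without another PR, GitHub's "Sync fork" button does it, or from the command line (assuming a remote named upstream and a main branch):

git fetch upstream
git merge upstream/main
git push origin main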
Thank you! I have been having this exact same issue and could not find any solution. Garr's 'thank you' comment was what I needed as well, since there was no easy way for me to figure out how to add httpd to the list of applications. The missing link was using Finder to drag httpd to the Full Disk Access window in the System Preferences section.
Note that you must dismiss the Finder window which opens when you first click the + on Full Disk Access to add a new application. Then you can proceed to drag httpd from your Finder window to the FDA section.
Thank you both for providing this information. Much appreciated!
Committing all files to git was not enough in my case. I also had to close a file that I had renamed externally and that remained open under its old name.
Emphasized items might simply mean items that can no longer be safely or consistently tracked by VS Code, which defaults to flagging them as such.
Thanks all, much appreciated!
I also had a requirement to extract a specific attribute based on another column, and this helped me solve it. Here it is for posterity:
WITH pets AS (SELECT 'Lizard' AS species, '
{
"Dog": {
"mainMeal": "Meat",
"secondaryMeal": "Nuggets"
},
"Cat": {
"mainMeal": "Milk",
"secondaryMeal": "Fish"
},
"Lizard": {
"mainMeal": "Insects",
"secondaryMeal": "None"
}
}'::jsonb AS petfood)
SELECT
pets.petfood,
jsonb_path_query_first(
pets.petfood,('$."'|| pets.species::text||'"."mainMeal"')::jsonpath
) ->> 0 as mypetmainmeal
FROM pets
BQ has a data type called BIGNUMERIC that can handle a scale of 38 decimal digits
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#decimal_types
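A quick illustration (the values are arbitrary, just showing 38 fractional digits surviving):

SELECT
  CAST('0.00000000000000000000000000000000000001' AS BIGNUMERIC) AS from_cast,
  BIGNUMERIC '1.23456789012345678901234567890123456789' AS from_literal;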
import pandas as pd
df = pd.DataFrame({
'event':[None, None, 'CRP', None, None, None, 'CRP', 'CRP', None, None, None, None]
})
print(df)
df['tile'] = (df['event'] == 'CRP').cumsum()
print(df)
Result
    event  tile
0    None     0
1    None     0
2     CRP     1
3    None     1
4    None     1
5    None     1
6     CRP     2
7     CRP     3
8    None     3
9    None     3
10   None     3
11   None     3
It appears this is an ongoing issue since the newest macOS update (something about an OpenSSH update causing breaking changes with Visual Studio).
This thread has some workarounds/preview VS builds with potential fixes:
https://developercommunity.visualstudio.com/t/Can-not-pair-to-mac-after-update-to-macO/10885301#T-N10887593
Thanks everyone for your useful inputs.
Here are the steps I followed to successfully resolve this issue (I am almost sure this will work for other editors as well, e.g. Jupyter).
After realizing that I didn't yet have conda installed, I followed these instructions to first install Miniconda:
conda list (running this command in your terminal will come back with "conda not found" if it is not installed)
https://www.anaconda.com/docs/getting-started/miniconda/install#mac-os
Then install the numpy package in the conda environment you desire:
conda create --name myEnv (where myEnv is the name of the environment where you want your numpy package etc. installed)
conda activate myEnv (to switch from base to myEnv)
conda install numpy (to install the package itself)
Now you are almost ready to start using numpy. If you import numpy in VS Code now, you will still get the traceback error. That is because you are not yet using myEnv (where numpy is installed) in your VS Code editor. This step switches your VS Code editor to myEnv.
In the bottom right corner of your VS Code editor, you will see the version of Python you are currently using. Click on it.
You will see a 'Select Interpreter' menu. You should see your new 'myEnv' environment within the Miniconda bin. Choose it. If you don't see myEnv here, restart VS Code to force it to recognize the new environment.
Now the import numpy command should work!
I am sure there are several ways to solve this problem (e.g. you could use a virtual environment as opposed to conda), but this worked for me; hopefully you will find it helpful.
I am building an open-source JavaScript library to consume their new v3 API. They are shutting down the legacy web tools API.
I'm having the same problem, but has anyone used VB.NET and set the cookie session to none in the web.config file?
I know this has been out here a while, but I will try to add to the discussion.
FileInfo does not have a static method named GetLength.
An instantiated FileInfo object does have a Length property that returns the byte count of the file; this is not the file size the OS is going to show you.
To obtain the file size (KB, MB, GB) you divide the byte count by factors of 1024.
FileInfo fi = new("someFileName");
long fiLength = fi.Length;                     // byte count (a byte is 8 bits)
long sizeKb = fiLength / 1024;                 // kilobytes the file takes up on disk or in memory
long sizeMb = fiLength / (1024 * 1024);        // megabytes
long sizeGb = fiLength / (1024 * 1024 * 1024); // gigabytes
It looks like it runs fine; however, there might be other code affecting it. Try isolating the code and finding anything else that could be messing with it.
params is a Promise, so you need to await it, like below:
const Page = async ({ params }: { params: Promise<{ id: string }> }) => {
  const { id } = await params;
};
Kindly confirm that you provisioned your app by adding custom integrations in the BIM 360 account admin.
This doesn't seem to work anymore...?
Any help very welcome ;-)
const Edit = (props) => {
    const {
        attributes,
        setAttributes
    } = props,
    {
        productNum,
        selectedProductCategory,
        displayType
    } = attributes;

    const wooRequestedElements = useSelect(select => {
        const postType = 'product';
        const args = {
            per_page: 6,
            _embed: true,
            product_cat: [selectedProductCategory]
        };
        return select('core').getEntityRecords('postType', postType, args);
    });
# Setting Day-of-Year to the Oldest Leap Year in Pandas
For your use case of converting day-of-year values (1-366) to dates using an unlikely leap year, here's the most robust approach:
## The Oldest Leap Year in Pandas
Pandas timestamps start in late **1677** (`pd.Timestamp.min` is 1677-09-21), so the earliest complete leap year in its supported range is **1680**. However, for practical purposes and to ensure full datetime functionality, I recommend using **1972** - the first leap year in the Unix epoch era (1970-01-01 onward).
```python
import pandas as pd
# Example with day-of-year values (1-366)
day_of_year = pd.Series([60, 366, 100]) # Example values
# Convert to dates using 1972 (first Unix epoch leap year)
dates = pd.to_datetime('1972') + pd.to_timedelta(day_of_year - 1, unit='D')
print(dates)
# Output:
# 0 1972-02-29
# 1 1972-12-31
# 2 1972-04-09
# dtype: datetime64[ns]
```
## Why 1972?
1. **Unix Epoch Compatibility**: 1972 is the first leap year after 1970 (Unix epoch start)
2. **Modern Calendar**: Uses the current Gregorian calendar rules
3. **Pandas Optimization**: Works efficiently with pandas' datetime operations
4. **Unlikely in Time Series**: Very old year that won't conflict with modern data
## Alternative: Using the Minimum Pandas Leap Year
If you truly need the oldest possible leap year that pandas supports:
```python
min_leap_year = 1680  # earliest complete leap year pandas supports
dates = pd.to_datetime(str(min_leap_year)) + pd.to_timedelta(day_of_year - 1, unit='D')
```
## For Your Existing Datetime Series
If you're modifying existing datetime objects (as in your example):
```python
dates = pd.Series(pd.to_datetime(['2023-05-01', '2021-12-15', '2019-07-20']))
new_dates = dates.dt.dayofyear # Extract day-of-year
new_dates = pd.to_datetime('1972') + pd.to_timedelta(new_dates - 1, unit='D')
```
This approach is more efficient than using `apply` with `replace`.
On Windows, go to File -> Preferences -> Settings and type Inherit Env. Turn on the checkbox; if it is already checked, uncheck it and check it again. Restart your VS Code. Find visual steps here: https://dev.to/ankitsahu/terminal-blank-in-vs-code-ubuntu-1kgc
Here is the story "Khud Par Vishwas" ("Believe in Yourself") in 500 words, in an emotional and inspirational form:
Khud Par Vishwas (Believe in Yourself)
Ravi lived in a small village where there were no good schools, no internet, and no one who dared to dream. His father farmed and his mother ran the household. Poverty was part of the four walls of Ravi's home, but in his heart he carried a dream: to become a doctor.
From childhood, people mocked him. "A village boy who can't even speak English properly, and he's going to become a doctor?" Ravi stayed silent, but a fire burned inside him. He never answered anyone; he simply worked hard.
He rose at 4 a.m. every morning to study. When the power went out, he lit an oil lamp. Many nights he kept studying by candlelight. Working in the fields left him exhausted, but his faith never tired. He knew that no matter what the world said, if he kept believing in himself, anything was possible.
He never got the chance to top his school, because the facilities were lacking. But he began preparing for NEET through self-study. He had no money for coaching, so he started learning from free YouTube lectures. His phone was old, but his determination was brand new.
When exam day came, Ravi took a bus from his village for the first time and went to the city. There was fear in his eyes, but belief in his heart. "I can do this," he told himself.
Two months later, when the results came out, Ravi had topped his district. The villagers who once laughed were now applauding. His mother wept, and there was pride in his father's eyes. And Ravi? His face was calm, but his eyes said: "This is the fruit of my hard work."
He proved that if you have true passion for something, and belief in yourself, every difficult road becomes easy.
Today Ravi is studying at a medical college. Whenever someone tells him "I can't do it," he says just one thing: "When the whole world tells you 'it won't happen,' tell yourself: 'I will do it and show them.' The secret of every victory is just one thing: belief in yourself."
If you like, I can create an audio version of this story, a Hindi-to-English translation, or something like a special message for Falak. Tell me what you would like.
This penalty is fair because it upholds accountability in a space that is often exploited due to its anonymity and lack of regulation. In traditional finance, fraud and theft carry legal consequences—Web3 should strive for similar protections without relying solely on centralized authorities. By using on-chain evidence, such as the withdrawal of investor funds followed by abandonment of the project, the community can define transparent, verifiable criteria for blacklisting.
Such a system would serve as a strong deterrent to bad actors, making them think twice before launching malicious projects. It would also help protect newcomers and non-technical users from falling victim to scams, thereby improving overall trust and adoption. This enforcement could be managed by decentralized watchdog DAOs, using community voting and objective data to ensure fairness and transparency.
Circling back to this. Sure, it's years later, but this may help someone.
This was caused by inconsistent SQL drivers. Because of a minor OS difference, I had to use different drivers, and they had inconsistent behavior on calculated, unnamed columns. Updating the driver fixed it.
Check the MySQL password. I cross-checked the password used in the Java file, and it worked.
I solved the problem by just re-running the emulator, but choosing "cold boot". As shown in the images provided.
While the transaction is not provided in @PostConstruct methods, it's possible to use it "the standard way" via ApplicationContext.getBean():
@Transactional(readOnly = true)
public class MyServiceImpl implements MyService {

    @Autowired
    private MyDao myDao;

    private CacheList cacheList;

    @Autowired
    private ApplicationContext appContext;

    @PostConstruct
    public void init() {
        appContext.getBean(MyService.class).onInit();
    }

    @Transactional(readOnly = true)
    public void onInit() {
        this.cacheList = new CacheList();
        this.cacheList.reloadCache(this.myDao.getAllFromServer());
    }

    ...
}
I'm having the same problem. I tried to log in using Xcode 15.2 to Azure Microsoft Entra ID using different types of access, from SwiftUI and from the old storyboard, and always get some problem. Did you finally make it work? Please share how you did it; I couldn't find any good example.
Found the problem: the bot wasn't added as a bot. You should add the Guild Install scope: bot.
That really sucks; sorry this happened. A few things you can try:
Appeal again. Sometimes it randomly works on the 5th or 10th try. Use this form.
Contact Meta via Facebook Ads Support, even if you never ran ads. Go to Facebook Business Help, start a chat, and politely explain.
Email these addresses (no promises, but worth a shot):
Post publicly: tweet @Instagram or @Meta with details. Sometimes public pressure helps.
Check if it was a mistake, like a false copyright claim or mass reporting.
If all else fails, sadly, you might have to start fresh. Back up your content next time (Google Drive, etc.). Hope it works out!
For anyone who might have a similar problem:
npm run --prefix nova dev && php artisan view:clear && php artisan nova:publish
This helped me.
The command runs npm run dev within the nova folder, then view:clear and nova:publish in the Laravel project.
I had a similar problem when putting an extra Text for a countdown like this:
HStack {
    Text(timerInterval: Date()...Date().addingTimeInterval(120))
    Text("min")
    Spacer()
}
The result was
| 0:15 ------------------------- min |
But if you use
Text(timerInterval: startTime...endDate, showsHours: false) + Text(" min")
You obtain this
| 0:15 min ------------------------- |
full example:
HStack {
    Text(timerInterval: Date()...Date().addingTimeInterval(120)) + Text(" min")
    Spacer()
}
The reason is that the system doesn't recognize "min" as part of the time text, and the time has a dynamic width, so it gets pushed to the end of the HStack.
You can also make a var / func to group both texts and then format them as one.
I hope this can help someone.
I am having the same issue with Spring Boot 3.4.5. What did you do to fix it?
Instead of using expo-dev client, I continued to use expo go, and I downgraded firebase by uninstalling and reinstalling:
"firebase": "^9.22.0"
I then deleted node_modules and package-lock.json, and re-ran npm install.
After that, I simply added the following to my metro.config.js file:
const { getDefaultConfig } = require('expo/metro-config');

const defaultConfig = getDefaultConfig(__dirname);
defaultConfig.resolver.sourceExts.push('cjs');
// This is the new line you should add in, after the previous lines
defaultConfig.resolver.unstable_enablePackageExports = false;

module.exports = defaultConfig;
After that, I didn't seem to get the error "Component auth has not been registered yet". You may still be able to use a newer version of firebase, but to be safe I downgraded to 9.22.0; you can definitely try a newer version and see if it works.
For me in VS 2022, while I was loading a SQL Server Database project it showed the project as "Incompatible" and gave the error below.
Issue: it was because I had installed both "SQL Server Data Tools" and "SQL Server Data Tools - SDK Style"; you need to install only one of them.
I uninstalled "SQL Server Data Tools - SDK Style", which resolved the error, and the project loaded successfully.
I finally figured it out. For anyone wondering, it was related to this question: Tomcat Service gets installed with "Local Service" account.
Long story short, sometime after Tomcat 9.0.30 there was a change to Commons Daemon:
Commons Daemon 1.2.0 onwards will default to running the service as the LocalService user rather than LocalSystem. That will break a default Tomcat install via the installer since LocalService won't have the necessary rights to the work dir for JSPs to work (and there may be a bunch of other issues too).
I was able to adjust my script to update the service before starting it, and everything is working again.
PowerShell
cmd.exe /c "tomcat9 //US//Tomcat9 --ServiceUser=LocalSystem"
Try using
udp_port:add_for_decode_as(xft_protocol)
I've had the same issue when upgrading the Java version in my app module but forgetting to update the Kotlin jvmToolchain version. Don't be me: make sure all specified Java versions match.
android {
    kotlin {
        jvmToolchain(17) // use the number matching your Java version
    }
}
If nothing works, maybe try --user-data-dir.
This works for me with pandas:
settings.py
APP_BBDD = {
    "LOCAL": {
        "DNS": "localhost:1521/XE",  # <- LOOK AT THIS!!
        "USER": "USERNAME",
        "PASS": "holaqase",
    }
}
And
import pandas as pd
import oracledb

import settings

def check_bbdd(environment="LOCAL"):
    """
    check bbdd
    """
    df = _get_df("SELECT * FROM TABLE_NAME", environment)
    print(df.head())
    return df

def _get_df(query, environment="LOCAL"):
    with oracledb.connect(
        user=settings.APP_BBDD[environment]["USER"],
        password=settings.APP_BBDD[environment]["PASS"],
        dsn=settings.APP_BBDD[environment]["DNS"],
    ) as conn:
        return pd.read_sql(query, conn)
And one test:
Same here. Apparently Grok's and ChatGPT's suggestion is to stop using Expo Go entirely and use expo-dev-client, which is far more cumbersome and heavy.
package.json:
"dependencies": {
"@expo/vector-icons": "^14.0.2",
"@react-native-async-storage/async-storage": "2.1.2",
"@react-native-community/datetimepicker": "8.3.0",
"@react-native-community/netinfo": "11.4.1",
"@react-native-community/slider": "4.5.6",
"@react-native-picker/picker": "2.11.0",
"@react-navigation/bottom-tabs": "^7.3.10",
"@react-navigation/native": "^7.1.6",
"buffer": "^6.0.3",
"date-fns": "^4.1.0",
"dotenv": "^16.5.0",
"expo": "~53.0.5",
"expo-constants": "~17.1.5",
"expo-device": "~7.1.4",
"expo-haptics": "~14.1.4",
"expo-linear-gradient": "~14.1.4",
"expo-notifications": "~0.31.1",
"expo-status-bar": "~2.2.3",
"firebase": "^11.6.1",
"react": "19.0.0",
"react-hook-form": "^7.54.2",
"react-native": "0.79.2",
"react-native-calendars": "^1.1310.0",
"react-native-gesture-handler": "~2.24.0",
"react-native-safe-area-context": "5.4.0",
"react-native-screens": "~4.10.0",
"unique-names-generator": "^4.7.1"
},
"devDependencies": {
"@babel/core": "^7.25.2",
"@types/react": "~19.0.10",
"typescript": "~5.8.3"
},
firebaseConfig.ts:
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';
import { getStorage } from 'firebase/storage';
import { getAnalytics } from "firebase/analytics";
import Constants from 'expo-constants';
import { getAuth, initializeAuth, getReactNativePersistence } from 'firebase/auth';
import AsyncStorage from '@react-native-async-storage/async-storage';
const firebaseConfig = {
apiKey: Constants.expoConfig.extra.firebaseApiKey,
authDomain: Constants.expoConfig.extra.firebaseAuthDomain,
projectId: Constants.expoConfig.extra.firebaseProjectId,
storageBucket: Constants.expoConfig.extra.firebaseStorageBucket,
messagingSenderId: Constants.expoConfig.extra.firebaseMessagingSenderId,
appId: Constants.expoConfig.extra.firebaseAppId,
};
// Initialize Firebase
const app = initializeApp(firebaseConfig);
const analytics = getAnalytics(app);
export const firestore = getFirestore(app);
export const storage = getStorage(app);
// Initialize Auth with persistence
export const auth = initializeAuth(app, {
persistence: getReactNativePersistence(AsyncStorage),
});
Persistence has no bearing on it; I have tried everything.
Yeah, this actually comes up a lot when training a tokeniser from scratch. Just because a word shows up in your training data doesn’t mean it will end up in the vocab. It depends on how the tokeniser is building things.
Even if “awesome” appears a bunch of times, it might not make it into the vocab as a full word. WordPiece tokenisers don’t just add whole words automatically. They try to balance coverage and compression, so sometimes they keep subword pieces instead.
If you want common words like that to stay intact, here are a few things you can try:
Increase vocab_size to something like 8000 or 10000. With 3000, you are going to see a lot of splits.
Lowering min_frequency might help, but only if the word is just barely making the cut.
Check the text file you're using to train. If “awesome” shows up with different casing or punctuation, like “Awesome” or “awesome,”, it might be treated as separate entries.
Also make sure it’s not just appearing two or three times in a sea of other data. That might not be enough for it to get included.
Another thing to be aware of is that when you load the tokeniser using BertTokenizer.from_pretrained(), it expects more than just a vocab file. It usually looks for tokenizer_config.json, special_tokens_map.json, and maybe a few others. If those aren't there, sometimes things load strangely. You could try using PreTrainedTokenizerFast instead, especially if you trained the tokeniser with the tokenizers library directly.
You can also just check vocab.txt and search for “awesome”. If it’s not in there as a full token, that would explain the split you are seeing.
Nothing looks broken in your code. This is just standard behaviour for how WordPiece handles vocab limits and slightly uncommon words. I’ve usually had better results with vocab sizes in the 8 to 16k range when I want to avoid unnecessary token splits.
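As a rough sketch of what I mean (corpus.txt is a placeholder for your training file):

from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["corpus.txt"],  # your training text
    vocab_size=10000,      # larger than 3000 to reduce splits
    min_frequency=2,
)
tokenizer.save_model(".")  # writes vocab.txt

# Check whether "awesome" made it in as a whole token
print("awesome" in tokenizer.get_vocab())
print(tokenizer.encode("awesome").tokens)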
There is an open Expo issue related to Expo Router; you can follow it here: https://github.com/expo/expo/issues/36375
For real-time synchronization of products and inventory between two Odoo instances:
Option 1: Cron Jobs (Easiest)
Syncs data periodically (e.g., every few minutes).
Pros: Easy to implement, flexible, less complex.
Cons: Not real-time, potential for conflicts if updates happen simultaneously.
Option 2: Database Replication (Complex)
Keeps data synchronized in real-time at the database level.
Pros: Real-time updates, ensures consistency.
Cons: Complex to set up and manage, requires advanced knowledge, potential for replication issues.
Recommendation: If real-time updates are crucial, go for Database Replication. If a small delay is acceptable, Cron Jobs can be a simpler solution.
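For the cron-job route, here is a minimal sketch over Odoo's external XML-RPC API (URLs, databases, and credentials are placeholders; error handling and the matching/write-back logic are omitted):

import xmlrpc.client

SRC = ("https://source.example.com", "src_db", "admin", "secret")
DST = ("https://dest.example.com", "dst_db", "admin", "secret")

def connect(url, db, user, pwd):
    common = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/common")
    uid = common.authenticate(db, user, pwd, {})
    models = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/object")
    return lambda model, method, *args: models.execute_kw(db, uid, pwd, model, method, *args)

src, dst = connect(*SRC), connect(*DST)

# Pull product codes and on-hand quantities from the source instance...
products = src("product.product", "search_read", [[]],
               {"fields": ["default_code", "qty_available"]})
# ...then push them to the destination on a schedule.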
Revisiting this again.
Actually, my previously accepted answer was not what I ended up with.
When using the MCP23017, I noticed that the GPIOA/GPIOB registers are very undesirable to poll when OUTPUTS are changed, but very consistent on INPUT changes.
So instead of polling GPIOA/GPIOB for output status, I write to OLATA/OLATB, which forces the chip into that state. I am not saying it will be 100% right, but it has led me to far greater success. I hope this backtrack will help you in the future if needed.
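For reference, a minimal sketch of the OLATA write on a Raspberry Pi with the smbus2 package (address 0x20 and bus 1 are assumptions; register addresses are for the default BANK=0 map):

from smbus2 import SMBus

MCP23017_ADDR = 0x20  # A0-A2 tied low (assumed)
IODIRA = 0x00         # direction register, port A
OLATA = 0x14          # output latch, port A

with SMBus(1) as bus:
    bus.write_byte_data(MCP23017_ADDR, IODIRA, 0x00)       # all of port A as outputs
    bus.write_byte_data(MCP23017_ADDR, OLATA, 0b00000001)  # drive GPA0 high
    state = bus.read_byte_data(MCP23017_ADDR, OLATA)       # read back the latch, not GPIOA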
Sadly, this is considered a cheat and code injection inside Roblox, which breaks the Roblox ToS. If you could print to the console, that would mean you could also change your player's walk speed etc., because everything you are doing is from the client side.
This means that if you managed to print "hello", then you could also do client-side things like moving your character, flying, jumping really high etc., but you can't affect other players. If you tried to change the color of a part, for example, only you would see it, not others.
Anyway, everything you are trying to do is an exploit or cheat because it interacts with the Roblox client in a malicious way, injecting and executing code. Also, Synapse X is a paid cheat for Roblox that can perform more advanced things, still not server-side.
The only way you can interact with the client without breaking ToS is changing the FPS cap or adding shaders to the game; that's all.
Just as extra info: when you have a colon after the name of the server, it means the port you are connecting to on that server. It's supposed to be a number between 0 and 65535. This could also be why you couldn't access the routes.
From Gemini: "There are a number of common networking ports that are used frequently. Ports 0 through 1023 are defined as well-known ports. Registered ports are from 1024 to 49151. The remainder of the ports from 49152 to 65535 can be used dynamically by applications."
This is not just applicable to the Qt configure script but to CMake generally when it does a try_compile.
Simply add the flag
--debug-trycompile
You don't need the UUID
{B4BFCC3A-DB2C-424C-B029-7FE99A87C641}
because the constants are defined in the library.
from win32comext.shell import shell
documents = shell.SHGetKnownFolderPath(shell.FOLDERID_Documents)
downloads = shell.SHGetKnownFolderPath(shell.FOLDERID_Downloads)
Oh, I forgot the expr option, never mind:
vim.keymap.set(
{ 'n', 'x' },
'<Tab>',
function() return vim.fn.mode() == 'V' and '$%' or '%' end,
{ noremap = true, expr = true }
)
Just found a solution, thanks to @Xebozone
Using Microsoft Identity I want to specify a Return Url when I call Sign In from my Blazor App
Since you posted your question, AWS has launched Same-Region Replication (SRR), in 2019. This allows you to replicate objects and changes in metadata across two buckets in the same region.
S3 Batch Replication can be used to replicate objects that were added prior to Same-Region Replication being configured.
After many trials with ChatGPT, this resolved it. Here it is:
// Instead of this:
request.ClientCertificates.Add(new X509Certificate2(CertPath, CertPwd));
// Use this:
request.ClientCertificates.Add(new X509Certificate2(CertPath, CertPwd, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet));
It was an implementation detail for Python 3.6 and lower; for Python 3.7 it became a language feature. See this thread on the python-dev mailing list: https://mail.python.org/pipermail/python-dev/2017-December/151283.html
Make it so. "Dict keeps insertion order" is the ruling. Thanks!
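A quick demonstration:

d = {"b": 1}
d["a"] = 2
d["c"] = 3
print(list(d))  # ['b', 'a', 'c'], insertion order, guaranteed since Python 3.7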
Maybe
^([^\:]+)?\:?([^\:]+)*\:?([^\:]+)*$
I created a Sample Blazor Server App with Azure Ad B2C by following this Documentation.
I successfully logged in and logged out without any issues.
Below is my complete code.
Program.cs:
using System.Reflection;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;
using BlazorApp1.Components;
using System.Security.Claims;

namespace BlazorApp1;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        var env = builder.Environment;

        builder.Configuration
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables()
            .AddUserSecrets(Assembly.GetExecutingAssembly(), optional: true);

        builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
            .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));

        builder.Services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
        {
            options.Events = new OpenIdConnectEvents
            {
                OnSignedOutCallbackRedirect = ctxt =>
                {
                    ctxt.Response.Redirect(ctxt.Options.SignedOutRedirectUri);
                    ctxt.HandleResponse();
                    return Task.CompletedTask;
                },
                OnTicketReceived = ctxt =>
                {
                    var claims = ctxt.Principal?.Claims.ToList();
                    return Task.CompletedTask;
                }
            };
        });

        builder.Services.AddControllersWithViews().AddMicrosoftIdentityUI();
        builder.Services.AddRazorComponents()
            .AddInteractiveServerComponents()
            .AddMicrosoftIdentityConsentHandler();
        builder.Services.AddCascadingAuthenticationState();
        builder.Services.AddHttpContextAccessor();

        var app = builder.Build();

        if (!app.Environment.IsDevelopment())
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseRouting();
        app.UseAuthentication();
        app.UseAuthorization();
        app.MapControllers();
        app.UseAntiforgery();

        app.MapRazorComponents<App>()
            .AddInteractiveServerRenderMode();

        app.Run();
    }
}
MainLayout.razor:
@inherits LayoutComponentBase

<div class="page">
    <div class="sidebar">
        <NavMenu />
    </div>
    <main>
        <div class="top-row px-4">
            <AuthorizeView>
                <Authorized>
                    Hello @context.User.Identity?.Name!
                    <a href="MicrosoftIdentity/Account/SignOut">Log out</a>
                </Authorized>
                <NotAuthorized>
                    <a href="/MicrosoftIdentity/Account/SignIn">Sign in with your social account</a>
                </NotAuthorized>
            </AuthorizeView>
        </div>
        <article class="content px-4">
            @Body
        </article>
    </main>
</div>

<div id="blazor-error-ui">
    An unhandled error has occurred.
    <a href="" class="reload">Reload</a>
    <a class="dismiss"></a>
</div>
appsettings.json:
"AzureAdB2C": {
"Instance": "https://<DomainName>.b2clogin.com/tfp/",
"ClientId": "<clientid>",
"CallbackPath": "/signin-oidc",
"Domain": "<DomainName>.onmicrosoft.com",
"SignUpSignInPolicyId": "<PolicyName>",
"ResetPasswordPolicyId": "",
"EditProfilePolicyId": ""
}
Make sure to add the redirect URL in the app registration as shown below:
Output:
Not sure if this will help anyone, but it looks like the token changes at midnight and noon every day. I found that I had to regenerate the token at noon in order to get any of my code working in the afternoon. (This may not be an issue with the code you all are using since you generate the token each time you run your hits against GHIN, but wanted to throw it out there for anyone that may be storing the token and using it later, which is what my code does).
This can also be done using useState in React. On clicking the button the state changes, and depending on the state we show the textarea.
const [clicked, setClicked] = useState(false);
<Textarea
placeholder="Add Your Note"
className={`${clicked ? "visible": "collapse"}`}
/>
<Button
onClick={(e) => {
setClicked(!clicked);
}}
>
Add Note
</Button>
Try this:
^(.+?)(?::(\d+))?(?::(\d*))?$
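A quick check of how it captures, in Python:

import re

pattern = re.compile(r'^(.+?)(?::(\d+))?(?::(\d*))?$')
print(pattern.match('example.com:8080:3').groups())  # ('example.com', '8080', '3')
print(pattern.match('example.com').groups())         # ('example.com', None, None)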
I had this problem this week, and the answer was just to set reverse_top_level to true.
This extension, Debugger for Chrome (https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome), has been deprecated, as Visual Studio Code now has a bundled JavaScript debugger (js-debug) that covers the same functionality and more (it debugs Node.js, Chrome, Edge, WebView2, VS Code extensions, Blazor, React Native)!
To use it with Node.js, read: https://code.visualstudio.com/docs/nodejs/nodejs-debugging
Found'em. They can be found at:
Faced the same issue today; this is how I solved it.
The root cause is the same: @MockitoSpyBean requires that a bean is present that Mockito can spy on. Previously, with @SpyBean, a bean was created if none was present.
I tested @ContextConfiguration, but it seems to break the default auto-configuration, which causes some of the filters/handlers not to be loaded.
So instead I use @Import at class level, and @MockitoSpyBean works as expected afterwards:
@WebMvcTest(MyController.class)
@AutoConfigureMockMvc
@Import(MyBeanClass.class) // add this
class MyControllerTest {

    @Autowired
    MockMvc mockMvc;

    @MockitoSpyBean
    MyBeanClass myBean;

    @Test
    void myTest() {
        mockMvc.perform(get("xxx"));
        // use the spy here
        verify(myBean, times(1)).xxx();
    }
}
I have a question: if you disable MSAL, what happens when a logged-in user signs a form with their account? I'm asking because I am also creating end-to-end tests for an Angular application.
Yours is getting converted to a string because of those braces @{...} around your function in code view. Try removing the action and redeclaring the variable; it should work. If it still doesn't, explicitly use the createArray(range(0,10)) function to convert it to an array.
RandomizedSearchCV can give worse results than manual tuning due to a few common reasons:
Too few iterations – n_iter=10 may not explore enough parameter combinations.
Poor parameter grid – Your grid might miss optimal values or be too coarse.
Inconsistent random seeds – Different runs can yield different results if random_state isn’t set.
Improper CV splits – Use StratifiedKFold for balanced class sampling.
Wrong scoring metric – Make sure scoring aligns with your real objective (e.g., accuracy, f1).
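Putting those together, here is a sketch with an assumed RandomForestClassifier and a placeholder grid (swap in your own estimator and distributions):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={
        "n_estimators": [100, 200, 400],
        "max_depth": [None, 5, 10, 20],
        "min_samples_leaf": [1, 2, 5],
    },
    n_iter=50,      # well above the default of 10
    scoring="f1",   # align with the real objective
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    random_state=42,
)
# search.fit(X, y)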
Try adding the property <property name="net.sf.jasperreports.export.xls.auto.fit.column" value="true"/> in the reportElement section, and in the paragraph section add <paragraph lineSpacing="1_1_2"/>. Don't forget to add textAdjust="StretchHeight" to the textField.
If you're here in 2025: just use Angular 19. It'll reload in place, without a full page refresh. You're welcome.