Try out Modernblocks, I think you might like it! https://moblstudio.vercel.app/
I think I found what I want. I need to do the following:
import asyncio

async def long_task():
    try:
        async with asyncio.timeout(5):
            print("Long task started")
            await asyncio.sleep(10)  # Simulate a long operation
            return "Long task completed"
    except asyncio.TimeoutError:
        print("Long task timed out")  # Do something when you reach the timeout
    except asyncio.CancelledError:
        print("Long task cancelled")
        raise

async def main():
    background_task = asyncio.create_task(long_task())
    # Do useful work, then wait for the background task before the loop closes
    await background_task

asyncio.run(main())
I'm having a similar issue with connecting to my Supabase project using the iOS simulator.
I wonder if it is because Apple ATS sees a URL that ends in "supabase.co" and doesn't like it.
If I update my infoPlist like this, it works. I don't know if this is a good long-term solution, though.
"ios": {
"supportsTablet": true,
"bundleIdentifier": "com.xxxxxxx.expoapp",
"infoPlist": {
"ITSAppUsesNonExemptEncryption": false,
"NSAppTransportSecurity": {
"NSAllowsArbitraryLoads": true,
"NSExceptionDomains": {
"supabase.co": {
"NSExceptionAllowsInsecureHTTPLoads": true,
"NSIncludesSubdomains": true
}
}
}
}
},
It is recommended to use AuthOptions instead of NextAuthOptions for better clarity and consistency. You can import it directly using:
import { AuthOptions } from 'next-auth';
This helps align your code with the latest conventions in the NextAuth.js documentation.
export const authConfig: AuthOptions = {}
I believe you can set the specific warning codes
https://learn.microsoft.com/en-us/visualstudio/msbuild/msbuild-command-line-reference?view=vs-2022
Under Settings > Code and automation > Actions, you can select which actions will be allowed.
Your template does not match the actual S3 layout. Omit the `<col_name>=` and it should work. Also, if partition projection is enabled, Athena will not consider manually added partitions.
'storage.location.template'='s3://mybucket/productA/${Model}/${Year}/${Month}/${Date}/'
I'm having the same issue right now. It auto-updated to SDK 53 last night and I haven't been able to figure out how to fix it.
I am getting the same issue when I try to upgrade from 52 to 53.
You're using the wrong addressing mode. In all likelihood you want to use Indirect Indexed. It can also be a monitor issue.
According to the Microsoft Documentation at: https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.querystring.add?view=aspnetcore-9.0
Request.QueryString.Add returns a QueryString, not void. However, Request.QueryString.Add does not mutate the request's query string; therefore, you need to do something like this:
Request.QueryString = Request.QueryString.Add("returnUrl", url.ToString());
git config --global --add safe.directory '*'

Use the above command to bypass the security check. The only difference is that I have not used an apostrophe for the directory.
So I was having this problem in Visual Studio 2022. In my case, the problem appears to have been that the stored procs were originally created without a schema, i.e. CREATE PROCEDURE [proc_SomeProc] instead of CREATE PROCEDURE [dbo].[proc_SomeProc]. Once I added the schema in both the Visual Studio project and the proc on SQL Server, no difference was reported.
That helps, but I have a question: how do I filter all "Microsoft Applications" in the Graph API?
Use aria-haspopup="true" and aria-expanded="false" attributes on dropdown triggers (not fully functional without JS but useful for screen readers).
I know this is old but this may help others experiencing the same issue:
QueryString.Add does not mutate the request, so you must do something like the following:
context.Request.QueryString = context.Request.QueryString.Add("name", "value");
None of that worked for me. It only worked after following these steps:
https://marketplace.visualstudio.com/items/?itemName=bradlc.vscode-tailwindcss
When your pull request is merged, the original repo creates a new merge commit, which doesn't exist in your fork. That's why you're "1 commit behind." Even with no file changes, Git tracks the merge as a unique commit. This could technically cause a loop if both repos keep pulling and merging empty commits back and forth.
Object.assign didn't seem to work for me, probably because I was resetting with a static const array that was somehow being used within the reactive. Instead I used the following in vue 3:
data.splice(0, Infinity, ...defaultData.slice())
There were two solutions to this problem:
For the web browser persistence, after the initial authentication, JS in the WPF app would find the necessary section on the MS login page and click it for the user.
For other applications, Imprivata, an app integral to these desktops, would persist the user login and add the credentials for them.
Apple devices include a default passkey provider, called Apple Passwords (formerly known as iCloud Keychain). If the user does not install their own passkey provider, Apple Passwords is used.
Apple Passwords creates synced passkeys, and they are synced to all other devices signed into the same Apple Account.
Hello, did you find any solution for this problem? I am facing the same after upgrading to the Expo 53 SDK.
Take a look at this one; maybe it will help.
I wrote a utility called "Bits" which does exactly what you want. It installs an Explorer right-click menu that, when selected, analyses the file and tells you if it's 32- or 64-bit.
It's around 5K in size, has no dependencies beyond what is already present on the system, and is a single file. The only installation required is to register a right-click menu with Explorer. The installer/uninstaller is embedded in the EXE.
Once installed, you simply right-click on the file you want to check and choose "32 or 64-bit?" from the menu.
You can get it or view the source here:
I'm not sure what the issue was, but going back to Flutter 3.24.0 fixed the red screen. It's possible that a newer version might also work.
I encountered the same issue while trying to use a @for loop to populate a dropdown in Angular 19.2.0. I was able to resolve it by adding the track keyword.
The problem was that I had installed the snap version of Firefox. I deleted that version and installed the deb version, and the program worked.
I came up with an answer. It's not necessarily the best answer and I would love to be corrected if any Springdoc experts see this, but it's the best I could do with the time I had for this problem.
After some reverse engineering, my estimation is that Springdoc does not support this or almost support this; i.e., a simple config change will not make it "just work".
Springdoc does have a QuerydslPredicateOperationCustomizer that supports Querydsl predicates in a similar fashion to what I'm asking, but it is triggered by the @QuerydslPredicate annotation and relies on domain-specific configuration on the annotation, which is not available for the annotated RootResourceInformation argument in Spring Data REST's RepositoryEntityController. It also only gets invoked when adequate operation context is provided, which Springdoc does not include for Spring Data REST's endpoints (perhaps for no other reason than that doing so breaks the QuerydslPredicateOperationCustomizer; I'm not sure). Long story short, this customizer doesn't work for this use case.
Ideally, this should probably be fixed within the QuerydslPredicateOperationCustomizer, but that is made more difficult than it should be by the fact that the DataRestRepository context is not available in that scope, which would be the simplest path to the entity domain type from which parameters could be inferred. Instead, the available context refers to the handler method within the RepositoryEntityController, which is generic to all entities and yields no simple way of inferring domain types.
To make this work at that level, the customizer would have to redo the process of looking up the domain type from the limited context that is available (which seems hard to implement without brittleness), or perhaps preferably, additional metadata would need to be carried throughout the process up to this point.
Any of that would require more expertise with Springdoc than I have, plus buy-in from Springdoc's development team. If any of them see this and have interest in an enhancement to this end, I would be happy to lend the knowledge I have of these integrations.
I extended Springdoc's DataRestRequestService with a mostly-identical service that I marked as the @Primary bean of its type, thus replacing the component used by Springdoc. In its buildParameters method, I added the line buildCustomParameters(operation, dataRestRepository); which invoked the methods below. It's imperfect, to be sure, but it worked well enough for my purposes (which was mainly about being able to use OpenAPI Generator to generate a fully functional SDK for my API).
public void buildCustomParameters(Operation operation, DataRestRepository dataRestRepository) {
    if (operation.getOperationId().startsWith("getCollectionResource-")) {
        addProjectionParameter(operation);
        addQuerydslParameters(operation, dataRestRepository.getDomainType());
    } else if (operation.getOperationId().startsWith("getItemResource-")) {
        addProjectionParameter(operation);
    }
}

public void addProjectionParameter(Operation operation) {
    var projectionParameter = new Parameter();
    projectionParameter.setName("projection");
    projectionParameter.setIn("query");
    projectionParameter.setDescription(
        "The name of the projection to which to cast the response model");
    projectionParameter.setRequired(false);
    projectionParameter.setSchema(new StringSchema());
    addParameter(operation, projectionParameter);
}

public void addQuerydslParameters(Operation operation, Class<?> domainType) {
    var queryType = SimpleEntityPathResolver.INSTANCE.createPath(domainType);
    var pathInits =
        Arrays.stream(queryType.getClass().getDeclaredFields())
            .filter(field -> Modifier.isStatic(field.getModifiers()))
            .filter(field -> PathInits.class.isAssignableFrom(field.getType()))
            .findFirst()
            .flatMap(
                field -> {
                    try {
                        field.setAccessible(true);
                        return Optional.of((PathInits) field.get(queryType));
                    } catch (Throwable ex) {
                        return Optional.empty();
                    }
                })
            .orElse(PathInits.DIRECT2);
    var paths = getPaths(queryType.getClass(), pathInits);
    var parameters =
        paths.stream()
            .map(
                path -> {
                    var parameter = new Parameter();
                    parameter.setName(path);
                    parameter.setIn("query");
                    parameter.setRequired(false);
                    return parameter;
                })
            .toArray(Parameter[]::new);
    addParameter(operation, parameters);
}

protected Set<String> getPaths(Class<?> clazz, PathInits pathInits) {
    return getPaths(clazz, "", pathInits).collect(Collectors.toSet());
}

protected Stream<String> getPaths(Class<?> clazz, String root, PathInits pathInits) {
    if (EntityPath.class.isAssignableFrom(clazz) && pathInits.isInitialized(root)) {
        return Arrays.stream(clazz.getFields())
            .flatMap(
                field ->
                    getPaths(
                        field.getType(),
                        appendPath(root, field.getName()),
                        pathInits.get(field.getName())));
    } else if (Path.class.isAssignableFrom(clazz) && !ObjectUtils.isEmpty(root)) {
        return Stream.of(root);
    } else {
        return Stream.of();
    }
}

private String appendPath(String root, String path) {
    if (Objects.equals(path, "_super")) {
        return root;
    } else if (ObjectUtils.isEmpty(root)) {
        return path;
    } else {
        return String.format("%s.%s", root, path);
    }
}

public void addParameter(Operation operation, Parameter... parameters) {
    if (operation.getParameters() == null) {
        operation.setParameters(new ArrayList<>());
    }
    operation.getParameters().addAll(Arrays.stream(parameters).toList());
}
Disclaimers:
This has undergone limited debugging and testing as of today, so use at your own risk.
This documents all initialized querydsl paths as string parameters. It would be cool to improve that using the actual schema type, but for my purposes this is good enough (since all query parameters have to become strings at some point anyway).
Actually doing this is very possibly a bad idea for many use cases, as many predicate options may incur very resource-intensive queries which could be abused. Use with caution and robust authorization controls.
As of this writing, Springdoc's integration with Spring Data REST has a significant performance problem, easily taking minutes to generate a spec for more than a few controllers and associations. This solution neither improves nor worsens that issue significantly. I'm just noting that here so that if others encounter it they are aware it is unrelated to this thread.
Versions that this worked with:
org.springframework.boot:spring-boot:3.4.1
org.springdoc:springdoc-openapi-starter-webmvc-ui:2.8.6
com.querydsl:querydsl-core:5.1.0
com.querydsl:querydsl-jpa:5.1.0:jakarta
Your fork is "1 commit behind" because GitHub created a merge commit in the upstream repo when your pull request was accepted. That commit doesn't exist in your fork until you sync it manually.
Yes, if both sides keep sending PRs to sync, it could create an endless loop of empty merge commits.
Thank you! I have been having this exact same issue and could not find any solution. Garr's 'thank you' comment is what I needed as well, as there was no easy way for me to figure out how to add httpd to the list of applications. The missing link was using Finder to drag httpd to the Full Disk Access window in the System Preferences section.
Note you must dismiss the Finder window which opens when you first click the + on Full Disk Access to add a new application. Then you can proceed to drag httpd from your Finder window to the FDA section.
Thank you both for providing this information. Much appreciated!
Committing all files to git was not enough for my case. I had to close an opened file that I renamed externally and remained open in the old name.
Emphasized items might simply be items that VS Code could no longer safely or consistently track, and that default to being marked as such.
Thanks all, much appreciated!
I did also have a requirement to extract a specific attribute based on another column, and this helped me solve for it. Here it is for posterity:
WITH pets AS (SELECT 'Lizard' AS species, '
{
"Dog": {
"mainMeal": "Meat",
"secondaryMeal": "Nuggets"
},
"Cat": {
"mainMeal": "Milk",
"secondaryMeal": "Fish"
},
"Lizard": {
"mainMeal": "Insects",
"secondaryMeal": "None"
}
}'::jsonb AS petfood)
SELECT
pets.petfood,
jsonb_path_query_first(
pets.petfood,('$."'|| pets.species::text||'"."mainMeal"')::jsonpath
) ->> 0 as mypetmainmeal
FROM pets
BQ has a data type called BIGNUMERIC that can handle a scale of 38 decimal digits
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#decimal_types
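As a quick illustration with the google-cloud-bigquery Python client (the literal below is just an example value; BIGNUMERIC keeps up to 38 fractional digits):

from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT CAST('0.12345678901234567890123456789012345678' AS BIGNUMERIC) AS big_val
"""
# The value comes back as a Decimal, preserving all 38 fractional digits
for row in client.query(sql).result():
    print(row.big_val)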
import pandas as pd
df = pd.DataFrame({
'event':[None, None, 'CRP', None, None, None, 'CRP', 'CRP', None, None, None, None]
})
print(df)
df['tile'] = (df['event'] == 'CRP').cumsum()
print(df)
Result:

    event  tile
0    None     0
1    None     0
2     CRP     1
3    None     1
4    None     1
5    None     1
6     CRP     2
7     CRP     3
8    None     3
9    None     3
10   None     3
11   None     3
It appears this is an ongoing issue since the newest macOS update (something about an OpenSSH update causing breaking changes with Visual Studio).
This thread has some workarounds and preview VS builds with potential fixes:
https://developercommunity.visualstudio.com/t/Can-not-pair-to-mac-after-update-to-macO/10885301#T-N10887593
Thanks everyone for your useful inputs.
Here are the steps I followed to successfully resolve this issue (I am almost sure, this will work for other editors as well e.g. Jupyter).
After realizing that I didn't yet install conda, I followed these instructions to first install miniconda:
conda list (this command in your terminal will come back with "conda not found" if it is not installed)
https://www.anaconda.com/docs/getting-started/miniconda/install#mac-os
Then install the numpy package in the conda environment you desire:
conda create --name myEnv (where myEnv is the name of the environment you want to have your numpy package etc installed)
conda activate myEnv (to switch from base to myEnv)
conda install numpy (to install the package itself)
Now, you are almost ready to start using your numpy package. If you do import numpy in your VSCode now, you will still get the traceback error. That is because you are not yet using your myEnv (where numpy is installed) in your VSCode editor. This step will start using the myEnv in your VSCode editor.
On the bottom right corner of your VSCode editor, you will see the version of python you are currently using. Click on it:
You will see a 'Select Interpreter' menu. You should see your new 'myEnv' environment within the miniconda bin; choose that. If you don't see your myEnv here, restart VSCode to force it to recognize the new environment.
Now, import numpy command should work!
I am sure there are several ways to solve this problem (e.g. you could use a virtual environment as opposed to conda). But, this worked for me, hopefully you will find this helpful.
Thanks
I am building an open-source JavaScript library to consume their new v3 API. They are shutting down the legacy web tools API.
I'm having the same problem. Has anyone used VB.NET and set the cookie session to None in the web.config file?
I know this has been out here a while, but I will try to add to the discussion.
FileInfo does not have a static method named GetLength.
An instantiated object of the FileInfo class does have a property named Length that returns the byte count of the file; this is not the file size the OS is going to show you.
To obtain the file size (KB, MB, GB), you need to divide the byte count by a factor of 1024.

FileInfo fi = new("someFileName");
long fiLength = fi.Length;                          // byte count, or size in bytes (a byte is 8 bits)
long fileSizeKb = fiLength / 1024;                  // kilobytes: the space the file takes up on the physical drive or in memory
long fileSizeMb = fiLength / (1024 * 1024);         // megabytes
long fileSizeGb = fiLength / (1024 * 1024 * 1024);  // gigabytes
It looks like it runs fine; however, there might be other code affecting it. Try isolating the code and finding anything else that could be messing with it.
params is a Promise, so you need to await it, like below:

const Page = async ({ params }: { params: Promise<{ id: string }> }) => {
  const { id } = await params;
};
Kindly confirm that you provisioned your app by adding it as a custom integration in the BIM 360 Account Admin.
This doesn't seem to work anymore...?
Any help very welcome ;-)
const Edit = ( props ) => {
const {
attributes,
setAttributes
} = props,
{
productNum,
selectedProductCategory,
displayType
} = attributes;
const wooRequestedElements = useSelect( select => {
const postType = 'product';
const args = {
per_page: 6,
_embed: true,
product_cat: [selectedProductCategory]
};
return select('core').getEntityRecords( 'postType', postType, args );
});
# Setting Day-of-Year to the Oldest Leap Year in Pandas
For your use case of converting day-of-year values (1-366) to dates using an unlikely leap year, here's the most robust approach:
## The Oldest Leap Year in Pandas
Pandas `datetime64[ns]` timestamps start at 1677-09-21, making **1680** the earliest fully supported leap year. However, for practical purposes and to ensure full datetime functionality, I recommend using **1972**, the first leap year in the Unix epoch era (1970-01-01 onward).
```python
import pandas as pd
# Example with day-of-year values (1-366)
day_of_year = pd.Series([60, 366, 100]) # Example values
# Convert to dates using 1972 (first Unix epoch leap year)
dates = pd.to_datetime('1972') + pd.to_timedelta(day_of_year - 1, unit='D')
print(dates)
# Output:
# 0 1972-02-29
# 1 1972-12-31
# 2 1972-04-09
# dtype: datetime64[ns]
```
## Why 1972?
1. **Unix Epoch Compatibility**: 1972 is the first leap year after 1970 (Unix epoch start)
2. **Modern Calendar**: Uses the current Gregorian calendar rules
3. **Pandas Optimization**: Works efficiently with pandas' datetime operations
4. **Unlikely in Time Series**: An out-of-the-way year that won't conflict with modern data
## Alternative: Using the Minimum Pandas Leap Year
If you truly need the oldest possible leap year that pandas supports:
```python
min_leap_year = 1680  # Earliest leap year fully inside pandas' supported range
dates = pd.to_datetime(str(min_leap_year)) + pd.to_timedelta(day_of_year - 1, unit='D')
```
## For Your Existing Datetime Series
If you're modifying existing datetime objects (as in your example):
```python
dates = pd.Series(pd.to_datetime(['2023-05-01', '2021-12-15', '2019-07-20']))
new_dates = dates.dt.dayofyear # Extract day-of-year
new_dates = pd.to_datetime('1972') + pd.to_timedelta(new_dates - 1, unit='D')
```
This approach is more efficient than using `apply` with `replace`.
On Windows, go to File -> Preferences -> Settings and type "Inherit Env". Turn on the checkbox; if it is already checked, uncheck and check it again. Restart your VS Code. Find visual steps here: https://dev.to/ankitsahu/terminal-blank-in-vs-code-ubuntu-1kgc
Yeh rahi "Khud Par Vishwas" kahani, 500 shabdon mein ā ek emotional aur prernaadayak roop mein:
Khud Par Vishwas
Ravi ek chhote se gaon mein rehta tha jahan na ache school the, na internet, aur na hi koi aisa tha jo sapne dekhne ki himmat karta. Uske pita kheti karte the aur maa ghar ka kaam sambhalti thi. Garibi Ravi ke ghar ki chaar diwari ka hissa thi, lekin uske dil mein ek sapna tha ā doctor banne ka.
Bachpan se hi log uska mazak udate the. āGaon ka ladka, angrezi bhi theek se nahi bol pata, yeh doctor banega?ā Ravi chup rehta, par uske andar ek aag jalti thi. Usne kabhi kisi ko jawab nahi diya, usne sirf mehnat ki.
Har subah 4 baje uthkar padhai karta. Bijli chali jati toh diya jala leta. Kahi baar to raton ko mombatti ki roshni mein padhai karta raha. Kheton mein kaam karke thak jaata, lekin uska vishwas kabhi nahi thaka. Wo jaanta tha ā duniya chahe jitna bhi kahe, agar usne khud par vishwas banaye rakha, toh kuch bhi mumkin hai.
School mein usse kabhi top karne ka mauka nahi mila, kyunki facilities kam thi. Lekin usne self-study se NEET ki tayari shuru ki. Uske paas coaching ka paisa nahi tha, lekin usne free YouTube lectures se seekhna shuru kiya. Mobile toh purana tha, lekin uska jazba bilkul naya tha.
Jab exam ka din aaya, Ravi ne apne gaon se pehli baar bus li aur sheher gaya. Uski aankhon mein dar tha, par dil mein vishwas tha. āMain kar sakta hoon,ā usne khud se kaha.
Do mahine baad jab result aaya, Ravi ne apne district mein top kiya. Gaon ke log jo kabhi hanste the, ab taaliyan bajaa rahe the. Maa ro rahi thi, pita ki aankhon mein garv tha. Aur Ravi? Uska chehra shaant tha, lekin aankhen keh rahi thi ā "Yeh meri mehnat ka phal hai."
Usne sabit kar diya ki agar kisi cheez ka sachcha junoon ho, aur khud par vishwas ho, toh har mushkil raasta asaan ban jata hai.
Ravi aaj ek medical college mein padh raha hai. Jab bhi usse koi kehta hai āMain nahi kar sakta,ā toh wo bas ek hi baat kehta hai: "Jab poori duniya tumse kehti ho 'nahi hoga', tab khud se kehna ā 'main kar ke dikhaunga.' Har jeet ka raaz hai sirf ek: Khud par vishwas."
Agar tum chaho, main is kahani ka audio version, Hindi se English translation, ya Falak ke liye special message jaisa kuch bhi create kar sakta hoon. Batao kya pasand aayega?
This penalty is fair because it upholds accountability in a space that is often exploited due to its anonymity and lack of regulation. In traditional finance, fraud and theft carry legal consequences; Web3 should strive for similar protections without relying solely on centralized authorities. By using on-chain evidence, such as the withdrawal of investor funds followed by abandonment of the project, the community can define transparent, verifiable criteria for blacklisting.
Such a system would serve as a strong deterrent to bad actors, making them think twice before launching malicious projects. It would also help protect newcomers and non-technical users from falling victim to scams, thereby improving overall trust and adoption. This enforcement could be managed by decentralized watchdog DAOs, using community voting and objective data to ensure fairness and transparency.
Circling back to this. Sure, it's years later, but this may help someone.
This was caused by inconsistent SQL drivers. Because of a minor OS difference, I had to use different drivers, and they had inconsistent behavior on calculated, unnamed columns. Updating the driver fixed it.
Check the MySQL password. I cross-checked the password used in my Java file, and it worked.
I solved the problem by just re-running the emulator, but choosing "Cold Boot".
While a transaction is not available in @PostConstruct methods, it is possible to use one "the standard way" via ApplicationContext.getBean():
@Transactional(readOnly = true)
public class MyServiceImpl implements MyService {

    @Autowired
    private MyDao myDao;

    private CacheList cacheList;

    @Autowired
    private ApplicationContext appContext;

    @PostConstruct
    public void init() {
        appContext.getBean(MyService.class).onInit();
    }

    @Transactional(readOnly = true)
    public void onInit() {
        this.cacheList = new CacheList();
        this.cacheList.reloadCache(this.myDao.getAllFromServer());
    }

    ...
}
I'm having the same problem. I tried to log in using Xcode 15.2 to Azure Microsoft Entra ID using different types of access, from SwiftUI and from the old storyboard, always getting some problem. Did you finally make it work? Please share how you did it; I couldn't find any good example.
Found the problem: the bot wasn't added as a bot. You should add the Guild Install scope: bot.
That really sucks; sorry this happened. A few things you can try:
Appeal again: sometimes it randomly works on the 5th or 10th try. Use this form.
Contact Meta via Facebook Ads Support, even if you never ran ads. Go to Facebook Business Help, start a chat, and politely explain.
Email these addresses (no promises, but worth a shot):
Post publicly: tweet @Instagram or @Meta with details. Sometimes public pressure helps.
Check if it was a mistake, like a false copyright claim or mass-reporting.
If all else fails, sadly, you might have to start fresh. Back up your content next time (Google Drive, etc.). Hope it works out!
For anyone who might have a similar problem:
npm run --prefix nova dev && php artisan view:clear && php artisan nova:publish
This helped me.
The command runs npm run dev within the nova folder, then view:clear and nova:publish in the Laravel project.
I had a similar problem when putting an extra Text for a countdown, like this:
HStack {
    Text(timerInterval: Date()...Date().addingTimeInterval(120))
    Text("min")
    Spacer()
}
The result was:
| 0:15 ------------------------- min |
But if you use:
Text(timerInterval: startTime...endDate, showsHours: false) + Text(" min")
You obtain this:
| 0:15 min ------------------------- |
Full example:
HStack {
    Text(timerInterval: Date()...Date().addingTimeInterval(120)) + Text("min")
    Spacer()
}
The reason is that the system doesn't recognize "min" as part of the timer's text, and the timer has a dynamic width, so it gets pushed to the end of the HStack.
You can also make a var/func to group both Texts and then format them as one.
I hope this can help someone.
I am having the same issue with Spring Boot 3.4.5. What did you do to fix it?
Instead of using expo-dev client, I continued to use expo go, and I downgraded firebase by uninstalling and reinstalling:
"firebase": "^9.22.0"
I then deleted node_modules and package-lock.json, and reinstalled with npm.
After that, I simply added the following line in my metro.config.js file:
const defaultConfig = getDefaultConfig(__dirname);
defaultConfig.resolver.sourceExts.push('cjs');
// This is the new line you should add in, after the previous lines
defaultConfig.resolver.unstable_enablePackageExports = false;
After that, I no longer got the error "Component auth has not been registered yet". You may still be able to use a newer version of Firebase; for safety I downgraded to 9.22.0, but you can definitely try a newer version and see if it works.
For me in VS2022, while I was loading a SQL Server Database project it showed the project as "Incompatible" and gave the error below.
Issue: it was because I had installed both "SQL Server Data Tools" and "SQL Server Data Tools - SDK Style"; you need to install only one of them.
I uninstalled "SQL Server Data Tools - SDK Style", which resolved the error, and the project loaded successfully.
I finally figured it out. For anyone wondering, it was related to this question: Tomcat Service gets installed with "Local Service" account.
Long story short, sometime after Tomcat 9.0.30 there was a change to Commons Daemon:
Commons Daemon 1.2.0 onwards will default to running the service as the LocalService user rather than LocalSystem. That will break a default Tomcat install via the installer since LocalService won't have the necessary rights to the work dir for JSPs to work (and there may be a bunch of other issues too).
I was able to adjust my script to update the service before starting and everything is working again.
PowerShell
cmd.exe /c "tomcat9 //US//Tomcat9 --ServiceUser=LocalSystem"
Try using
udp_port:add_for_decode_as(xft_protocol)
I've had the same issue when upgrading the Java version in my app module, but I forgot to update the Kotlin jvmToolchain version. Don't be me: make sure all specified Java versions match.

android {
    kotlin {
        jvmToolchain(17) // use the same number as the Java version specified elsewhere
    }
}
if nothing works, maybe try --user-data-dir
This worked for me with pandas.
settings.py:
APP_BBDD = {
    "LOCAL": {
        "DNS": "localhost:1521/XE",  # <- LOOK AT THIS!
        "USER": "USERNAME",
        "PASS": "holaqase",
    }
}
And:
import pandas as pd
import oracledb

import settings


def check_bbdd(environment="LOCAL"):
    """Check the database connection."""
    df = _get_df("SELECT * FROM TABLE_NAME", environment)
    print(df.head())
    return df


def _get_df(query, environment="LOCAL"):
    with oracledb.connect(
        user=settings.APP_BBDD[environment]["USER"],
        password=settings.APP_BBDD[environment]["PASS"],
        dsn=settings.APP_BBDD[environment]["DNS"],
    ) as conn:
        return pd.read_sql(query, conn)
Same here. Apparently Grok's and ChatGPT's suggestion is to stop using Expo Go entirely and use expo-dev-client, which is far more cumbersome and heavy.
package.json:
"dependencies": {
"@expo/vector-icons": "^14.0.2",
"@react-native-async-storage/async-storage": "2.1.2",
"@react-native-community/datetimepicker": "8.3.0",
"@react-native-community/netinfo": "11.4.1",
"@react-native-community/slider": "4.5.6",
"@react-native-picker/picker": "2.11.0",
"@react-navigation/bottom-tabs": "^7.3.10",
"@react-navigation/native": "^7.1.6",
"buffer": "^6.0.3",
"date-fns": "^4.1.0",
"dotenv": "^16.5.0",
"expo": "~53.0.5",
"expo-constants": "~17.1.5",
"expo-device": "~7.1.4",
"expo-haptics": "~14.1.4",
"expo-linear-gradient": "~14.1.4",
"expo-notifications": "~0.31.1",
"expo-status-bar": "~2.2.3",
"firebase": "^11.6.1",
"react": "19.0.0",
"react-hook-form": "^7.54.2",
"react-native": "0.79.2",
"react-native-calendars": "^1.1310.0",
"react-native-gesture-handler": "~2.24.0",
"react-native-safe-area-context": "5.4.0",
"react-native-screens": "~4.10.0",
"unique-names-generator": "^4.7.1"
},
"devDependencies": {
"@babel/core": "^7.25.2",
"@types/react": "~19.0.10",
"typescript": "~5.8.3"
},
firebaseConfig.ts:
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';
import { getStorage } from 'firebase/storage';
import { getAnalytics } from "firebase/analytics";
import Constants from 'expo-constants';
import { getAuth, initializeAuth, getReactNativePersistence } from 'firebase/auth';
import AsyncStorage from '@react-native-async-storage/async-storage';
const firebaseConfig = {
apiKey: Constants.expoConfig.extra.firebaseApiKey,
authDomain: Constants.expoConfig.extra.firebaseAuthDomain,
projectId: Constants.expoConfig.extra.firebaseProjectId,
storageBucket: Constants.expoConfig.extra.firebaseStorageBucket,
messagingSenderId: Constants.expoConfig.extra.firebaseMessagingSenderId,
appId: Constants.expoConfig.extra.firebaseAppId,
};
// Initialize Firebase
const app = initializeApp(firebaseConfig);
const analytics = getAnalytics(app);
export const firestore = getFirestore(app);
export const storage = getStorage(app);
// Initialize Auth with persistence
export const auth = initializeAuth(app, {
persistence: getReactNativePersistence(AsyncStorage),
});
Persistence has no bearing on it; I have tried everything.
Yeah, this actually comes up a lot when training a tokeniser from scratch. Just because a word shows up in your training data doesn't mean it will end up in the vocab. It depends on how the tokeniser is building things.
Even if "awesome" appears a bunch of times, it might not make it into the vocab as a full word. WordPiece tokenisers don't just add whole words automatically. They try to balance coverage and compression, so sometimes they keep subword pieces instead.
If you want common words like that to stay intact, here are a few things you can try:
Increase vocab_size to something like 8000 or 10000. With 3000, you are going to see a lot of splits.
Lowering min_frequency might help, but only if the word is just barely making the cut.
Check the text file you're using to train. If "awesome" shows up with different casing or punctuation, like "Awesome" or "awesome,", it might be treated as separate entries.
Also make sure it's not just appearing two or three times in a sea of other data. That might not be enough for it to get included.
Another thing to be aware of is that when you load the tokeniser using BertTokenizer.from_pretrained(), it expects more than just a vocab file. It usually looks for tokenizer_config.json, special_tokens_map.json, and maybe a few others. If those aren't there, sometimes things load strangely. You could try using PreTrainedTokenizerFast instead, especially if you trained the tokeniser with the tokenizers library directly.
You can also just check vocab.txt and search for "awesome". If it's not in there as a full token, that would explain the split you are seeing.
Nothing looks broken in your code. This is just standard behaviour for how WordPiece handles vocab limits and slightly uncommon words. I've usually had better results with vocab sizes in the 8 to 16k range when I want to avoid unnecessary token splits.
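For what it's worth, here's a minimal training sketch of the kind of setup I mean, using the Hugging Face tokenizers library (the corpus file name and the sizes are placeholders):

from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from transformers import PreTrainedTokenizerFast

# WordPiece model with an [UNK] token, whitespace pre-tokenization
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.WordPieceTrainer(
    vocab_size=8000,   # larger vocab keeps more whole words intact
    min_frequency=2,   # lower threshold for inclusion
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(["corpus.txt"], trainer)

# Wrap it so it loads cleanly without a full BertTokenizer file set
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
print(fast_tokenizer.tokenize("awesome"))  # ideally ['awesome'], not subword pieces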
There is an open Expo issue related to expo-router. You can follow this link: https://github.com/expo/expo/issues/36375
For real-time synchronization of products and inventory between two Odoo instances:
Option 1: Cron Jobs (Easiest)
Syncs data periodically (e.g., every few minutes).
Pros: Easy to implement, flexible, less complex.
Cons: Not real-time, potential for conflicts if updates happen simultaneously.
Option 2: Database Replication (Complex)
Keeps data synchronized in real-time at the database level.
Pros: Real-time updates, ensures consistency.
Cons: Complex to set up and manage, requires advanced knowledge, potential for replication issues.
Recommendation: If real-time updates are crucial, go for Database Replication. If a small delay is acceptable, Cron Jobs can be a simpler solution.
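As a rough sketch of the cron-job approach, polling over Odoo's external XML-RPC API (all URLs, database names, credentials, and the name-only sync are placeholder assumptions; qty_available is a computed field, so real stock sync would go through inventory adjustments):

import xmlrpc.client

# Hypothetical connection details for the two instances
SRC = {"url": "https://source.example.com", "db": "src", "user": "sync@example.com", "pwd": "secret"}
DST = {"url": "https://dest.example.com", "db": "dst", "user": "sync@example.com", "pwd": "secret"}

def connect(cfg):
    common = xmlrpc.client.ServerProxy(f"{cfg['url']}/xmlrpc/2/common")
    uid = common.authenticate(cfg["db"], cfg["user"], cfg["pwd"], {})
    models = xmlrpc.client.ServerProxy(f"{cfg['url']}/xmlrpc/2/object")
    return uid, models

def sync_products():
    src_uid, src = connect(SRC)
    dst_uid, dst = connect(DST)
    products = src.execute_kw(SRC["db"], src_uid, SRC["pwd"],
        "product.product", "search_read", [[]],
        {"fields": ["default_code", "name"]})
    for p in products:
        # Match by internal reference on the destination and update the name
        ids = dst.execute_kw(DST["db"], dst_uid, DST["pwd"],
            "product.product", "search",
            [[("default_code", "=", p["default_code"])]])
        if ids:
            dst.execute_kw(DST["db"], dst_uid, DST["pwd"],
                "product.product", "write", [ids, {"name": p["name"]}])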
Revisiting this again. Actually, my previously accepted answer was not what it ended up being.
When using the MCP23017, I noticed that the GPIOA/GPIOB register is very undesirable to poll when OUTPUTS are changed, but it is very consistent on INPUT changes.
Instead of polling GPIOA/GPIOB for output status, I instead wrote to OLATA/OLATB, which forces the chip into that state. I am not saying it will be 100% right, but it has led me to far greater success. I hope this backtrack will help you in the future if needed.
Sadly, this is considered a cheat and code injection inside Roblox, which breaks the Roblox ToS. If you could print to the console, that would mean you could also change your player's walk speed, etc., because everything you are doing is client-side.
This means if you managed to print "hello", then you could also do client-side things like moving your character, flying, or jumping really high, but you can't affect other players. If you tried to change the color of a part, for example, only you would see it, not others.
Anyway, everything you are trying to do is an exploit or cheat, because it interacts with the Roblox client in a malicious way, injecting and executing code. Also, Synapse X is a paid cheat for Roblox that can perform more advanced things, still not server-side.
The only way you can interact with the client without breaking ToS is changing the FPS cap or adding shaders to the game; that's all.
Just as extra info: when you have a colon after the name of the server, what follows is the port you are connecting to on that server. It is supposed to be a number between 0 and 65535. This could also be why you couldn't access the routes.
From Gemini: "There are a number of common networking ports that are used frequently. Ports 0 through 1023 are defined as well-known ports. Registered ports are from 1024 to 49151. The remainder of the ports from 49152 to 65535 can be used dynamically by applications."
This is not just applicable to qt configure but also to CMake when it runs try_compile-v1. Simply add the flag --debug-trycompile.
You don't need the UUID {B4BFCC3A-DB2C-424C-B029-7FE99A87C641}, because the constants are defined in the library.
from win32comext.shell import shell
documents = shell.SHGetKnownFolderPath(shell.FOLDERID_Documents)
downloads = shell.SHGetKnownFolderPath(shell.FOLDERID_Downloads)
Oh, I forgot the expr option; never mind.
vim.keymap.set(
{ 'n', 'x' },
'<Tab>',
function() return vim.fn.mode() == 'V' and '$%' or '%' end,
{ noremap = true, expr = true }
)
Just found a solution, thanks to @Xebozone: "Using Microsoft Identity I want to specify a Return Url when I call Sign In from my Blazor App".
Since you posted your question, AWS has launched Same-Region Replication (SRR), in 2019. This allows you to replicate objects and changes in metadata across two buckets in the same region.
S3 Batch Replication can be used to replicate objects that were added prior to Same-Region Replication being configured.
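For illustration, here's a minimal boto3 sketch of enabling SRR (bucket names and the IAM role ARN are placeholders; versioning must already be enabled on both buckets, and the role needs S3 replication permissions):

import boto3

s3 = boto3.client("s3")

# Attach a replication configuration to the source bucket.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},  # empty filter = replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }],
    },
)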
After many trials, ChatGPT resolved it. Here it is:
// Instead of this:
request.ClientCertificates.Add(new X509Certificate2(CertPath, CertPwd));
// Use this:
request.ClientCertificates.Add(new X509Certificate2(CertPath, CertPwd, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet));
It was an implementation detail for Python 3.6 and lower; for Python 3.7 it became a language feature. See this thread on the python-dev mailing list: https://mail.python.org/pipermail/python-dev/2017-December/151283.html
Make it so. "Dict keeps insertion order" is the ruling. Thanks!
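A quick illustration of the guarantee (Python 3.7+):

d = {}
d["banana"] = 3
d["apple"] = 1
d["cherry"] = 2
print(list(d))  # ['banana', 'apple', 'cherry']: insertion order, not alphabetical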
Maybe
^([^\:]+)?\:?([^\:]+)*\:?([^\:]+)*$
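If it helps, a quick check of that pattern in Python (the sample strings are my own guesses at the input shape):

import re

pattern = re.compile(r"^([^\:]+)?\:?([^\:]+)*\:?([^\:]+)*$")

for s in ["host", "host:8080", "host:8080:path"]:
    m = pattern.match(s)
    print(s, "->", m.groups() if m else None)
# host -> ('host', None, None)
# host:8080 -> ('host', '8080', None)
# host:8080:path -> ('host', '8080', 'path')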
I created a Sample Blazor Server App with Azure Ad B2C by following this Documentation.
I successfully logged in and logged out without any issues.
Below is my complete code.
Program.cs:
using System.Reflection;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;
using BlazorApp1.Components;
using System.Security.Claims;

namespace BlazorApp1;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        var env = builder.Environment;

        builder.Configuration
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables()
            .AddUserSecrets(Assembly.GetExecutingAssembly(), optional: true);

        builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
            .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));

        builder.Services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
        {
            options.Events = new OpenIdConnectEvents
            {
                OnSignedOutCallbackRedirect = ctxt =>
                {
                    ctxt.Response.Redirect(ctxt.Options.SignedOutRedirectUri);
                    ctxt.HandleResponse();
                    return Task.CompletedTask;
                },
                OnTicketReceived = ctxt =>
                {
                    var claims = ctxt.Principal?.Claims.ToList();
                    return Task.CompletedTask;
                }
            };
        });

        builder.Services.AddControllersWithViews().AddMicrosoftIdentityUI();
        builder.Services.AddRazorComponents()
            .AddInteractiveServerComponents()
            .AddMicrosoftIdentityConsentHandler();
        builder.Services.AddCascadingAuthenticationState();
        builder.Services.AddHttpContextAccessor();

        var app = builder.Build();

        if (!app.Environment.IsDevelopment())
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseRouting();
        app.UseAuthentication();
        app.UseAuthorization();
        app.MapControllers();
        app.UseAntiforgery();

        app.MapRazorComponents<App>()
            .AddInteractiveServerRenderMode();

        app.Run();
    }
}
MainLayout.razor:
@inherits LayoutComponentBase
<div class="page">
<div class="sidebar">
<NavMenu />
</div>
<main>
<div class="top-row px-4">
<AuthorizeView>
<Authorized>
Hello @context.User.Identity?.Name!
<a href="MicrosoftIdentity/Account/SignOut">Log out</a>
</Authorized>
<NotAuthorized>
<a href="/MicrosoftIdentity/Account/SignIn">Sign in with your social account</a>
</NotAuthorized>
</AuthorizeView>
</div>
<article class="content px-4">
@Body
</article>
</main>
</div>
<div id="blazor-error-ui">
An unhandled error has occurred.
<a href="" class="reload">Reload</a>
<a class="dismiss"></a>
</div>
appsettings.json:
"AzureAdB2C": {
"Instance": "https://<DomainName>.b2clogin.com/tfp/",
"ClientId": "<clientid>",
"CallbackPath": "/signin-oidc",
"Domain": "<DomainName>.onmicrosoft.com",
"SignUpSignInPolicyId": "<PolicyName>",
"ResetPasswordPolicyId": "",
"EditProfilePolicyId": ""
}
Make sure to add the redirect URL in the app registration.
Not sure if this will help anyone, but it looks like the token changes at midnight and noon every day. I found that I had to regenerate the token at noon in order to get any of my code working in the afternoon. (This may not be an issue with the code you all are using, since you generate the token each time you run your requests against GHIN, but I wanted to throw it out there for anyone who may be storing the token and using it later, which is what my code does.)
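As a sketch of the caching workaround (the noon/midnight boundaries come from my observation above; get_new_token stands in for however you request a token):

from datetime import datetime

_token = None
_token_fetched_at = None

def _last_boundary(now):
    """Most recent midnight or noon before `now`."""
    hour = 12 if now.hour >= 12 else 0
    return now.replace(hour=hour, minute=0, second=0, microsecond=0)

def get_token():
    """Return a cached token, refreshing it once a noon/midnight boundary has passed."""
    global _token, _token_fetched_at
    now = datetime.now()
    if _token is None or _token_fetched_at < _last_boundary(now):
        _token = get_new_token()  # hypothetical: your existing token request
        _token_fetched_at = now
    return _token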
This can also be done using useState in React. On clicking the button, the state changes, and depending on the state we show the textarea.
const [clicked, setClicked] = useState(false);
<Textarea
placeholder="Add Your Note"
className={`${clicked ? "visible": "collapse"}`}
/>
<Button
onClick={(e) => {
setClicked(!clicked);
}}
>
Add Note
</Button>
Try this:
^(.+?)(?::(\d+))?(?::(\d*))?$
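A quick sanity check of that pattern in Python (the inputs are illustrative):

import re

pattern = re.compile(r"^(.+?)(?::(\d+))?(?::(\d*))?$")

for s in ["example.com", "example.com:80", "example.com:80:42"]:
    print(s, "->", pattern.match(s).groups())
# example.com -> ('example.com', None, None)
# example.com:80 -> ('example.com', '80', None)
# example.com:80:42 -> ('example.com', '80', '42')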
I had this problem this week, and the answer was just to set reverse_top_level to true.
This extension, Debugger for Chrome (https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome), has been deprecated, as Visual Studio Code now has a bundled JavaScript debugger (js-debug) that covers the same functionality and more (it debugs Node.js, Chrome, Edge, WebView2, VS Code extensions, Blazor, React Native)!
To use it with Node.js, read: https://code.visualstudio.com/docs/nodejs/nodejs-debugging.
Found'em. They can be found at:
I faced the same issue today, and this is how I solved it.
The root cause is the same: @MockitoSpyBean requires that you have a bean present that Mockito can spy on. Previously, with @SpyBean, a bean was created if none was present.
I tested @ContextConfiguration, but it seems to break the default auto-configuration, which causes some of the filters/handlers not to be loaded.
Instead, I used @Import at the class level, and @MockitoSpyBean worked as expected afterwards.

@WebMvcTest(MyController.class)
@AutoConfigureMockMvc
@Import(MyBeanClass.class) // add this
class MyControllerTest {

    @Autowired
    MockMvc mockMvc;

    @MockitoSpyBean
    MyBeanClass myBean;

    @Test
    void myTest() throws Exception {
        mockMvc.perform(get("/xxx"));
        // use the spy here
        verify(myBean, times(1)).xxx();
    }
}
I have a question: if you disable MSAL, what happens when a logged-in user signs a form with their account? I'm asking because I am also creating end-to-end tests for an Angular application.
Yours is getting converted to a string because of those braces @{...} around your function in code view. Try removing the action and redeclaring the variable; it should work. If it still does not, explicitly use the 'createArray(range(0,10))' function to convert it to an array.
RandomizedSearchCV can give worse results than manual tuning due to a few common reasons (see the sketch after this list):
Too few iterations: n_iter=10 may not explore enough parameter combinations.
Poor parameter grid: your grid might miss optimal values or be too coarse.
Inconsistent random seeds: different runs can yield different results if random_state isn't set.
Improper CV splits: use StratifiedKFold for balanced class sampling.
Wrong scoring metric: make sure scoring aligns with your real objective (e.g., accuracy, f1).
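A minimal sketch addressing those points with scikit-learn (the estimator, parameter ranges, and data are placeholders):

from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

X, y = make_classification(n_samples=500, random_state=42)

param_distributions = {
    "n_estimators": randint(100, 500),   # wide enough to cover good values
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": randint(1, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions,
    n_iter=50,                           # more than the default 10
    scoring="f1",                        # match your real objective
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    random_state=42,                     # reproducible parameter sampling
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)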
Try adding the property <property name="net.sf.jasperreports.export.xls.auto.fit.column" value="true"/> in the reportElement section, and in the paragraph section add <paragraph lineSpacing="1_1_2"/>. Don't forget to add textAdjust="StretchHeight" in the textField.
If you're here in 2025, just use Angular 19. It'll reload in place, without a full page refresh. You're welcome.
On the C# side, make sure the RestClient sends the correct headers:
request.AddHeader("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
On the PHP side, at the top of your script (before output), force UTF-8 interpretation:
header('Content-Type: text/html; charset=utf-8');
mb_internal_encoding('UTF-8');
Also, ensure your PHP script correctly reads the POST parameters:
$content = json_decode($_POST['content'], true);
Double-check your MySQL connection:
$this->db->exec("SET NAMES utf8mb4");
$this->db->exec("SET CHARACTER SET utf8mb4");
I enabled BuildConfig in my modules using this article, which also has a guide on how to improve build speed:
https://medium.com/androiddevelopers/5-ways-to-prepare-your-app-build-for-android-studio-flamingo-release-da34616bb946
The issue was indeed related to the apache-airflow-providers-fab package, as suggested by @bcincy's comment.
x-airflow-common:
  &airflow-common
  # ... other common settings ...
  environment:
    &airflow-common-env
    # ... existing environment variables (including AIRFLOW__CORE__BASE_URL) ...
    # Modified this line to add the FAB provider package:
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-} apache-airflow-providers-fab==2.0.2
    # ... rest of environment variables ...
Then recreate the containers:
docker compose down
docker compose up -d
I think this question needs more context, but one thing you could try is making use of Rails Runner.
I had the same error when my NuGet packages were set to different versions: Microsoft.SemanticKernel.Connectors.AzureOpenAI was 1.47.0 and another was 1.48.0. Updating both to 1.48.0 fixed the issue. Make sure all your SemanticKernel-related NuGet packages are on the same version.
Thanks @Thom and @Siggemannen for your insights.
I tried it in my environment by saving the .csv file in UTF-8 format and importing the data into Azure SQL DB, and it worked well, saving the data in the same format into my Azure SQL Database successfully.
I tried to save the below .csv data with UTF-8 format into Azure SQL DB:
Id,Description
1,"Price is £100"
Try pnpm dlx tailwindcss@3 init; it should be OK!