New to developing, so I'm in the beginning stages! Would love to know the process for setting up an emulator though. Where can I find the download for Flutter, or where would I start in learning the basics? TIA 😊
Just open the build.gradle.kts (Module :app) file, add this line under dependencies: implementation("com.google.firebase:firebase-database:20.3.0"), and click Sync Now.
In my case the issue was that the CSS was not shared properly. The charts were there, just not visible, because some CSS classes needed to be shared as well.
index.css and everything set up in there, including the Tailwind setup if you use it, will not be shared. What's in App.css will be.
UPD: It works fine. The problem was that I was testing the noise on a plane with a few thousand vertices. Testing it on a plane with an absurdly high resolution showed me that the initial solution was fine.
Any fix for this? I am having the same error at the moment and I am frustrated. My iOS app works fine, but my Android app is the one having the issue.
What you're seeing is an artifact of VS's fast-mode container debugging (see here for details). The image/container used for debugging in the Debug configuration only works within VS. If you right-click on the Dockerfile, you can build a full image that is usable outside of VS.
I ended up finding SaveCopyAs, which solved my problem. It creates the copy without switching me over to that copy.
I finally addressed this issue by downgrading Next.js from 15 to 14. It seems Next.js 15 is not very robust.
I don't know the use case of your code, but here is what worked for me when I tried to create a figure with 2 rows, 4 columns, and 3 graphs. Use this:
ax3=fig.add_subplot(2,2,(3,4))
Look in the browser for urql cache warnings. You may need to query an id
property on the entity, especially if the same Type is being queried from another part of your app.
Instead of 4 lines you can just type:
String s = rootnodeResolution.iterator().next().path("id").asText();
5 years late, but I'm facing the same issue on my first big project. Resources tackling this topic are very limited and I've tried every solution I found; I still get "Initial Response Status: 401" and fail to retrieve session cookies. I don't know what to do now, lol.
sce.trustAsHtml can allow XSS (script injection) attacks. What would be the best solution?
Below is the correct syntax.
SELECT STR_TO_DATE(c.date, '%d/%m/%y') AS parsed_date
FROM concertlistts AS c;
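As a quick sanity check on the format string, the same %d/%m/%y pattern parses in Python (the sample date here is made up; note that two-digit years are pivoted, e.g. 99 → 1999):

```python
from datetime import datetime

# Parse day/month/two-digit-year, mirroring MySQL's STR_TO_DATE format
parsed = datetime.strptime('25/12/99', '%d/%m/%y')
print(parsed.date())  # 1999-12-25
```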
OK, it turns out that the post-7.0 versions of nbconvert have a native flag, --embed-images:

jupyter nbconvert --to html --embed-images notebook.ipynb

That is the solution.
This code works; you can try it. However, there is no need for the title to be readonly
CustomTitleStrategy
import { Injectable } from '@angular/core';
import { Title } from '@angular/platform-browser';
import { RouterStateSnapshot, TitleStrategy } from '@angular/router';
import { TranslateService } from '@ngx-translate/core';

@Injectable({ providedIn: 'root' })
export class CustomTitleStrategy extends TitleStrategy {
  constructor(private title: Title, private translate: TranslateService) {
    super();
  }

  override updateTitle(routerSnapshot: RouterStateSnapshot) {
    const title = this.buildTitle(routerSnapshot);
    if (title) {
      const translatedTitle = this.translate.instant('titles.site-name') + ' | ' + this.translate.instant(`titles.${title}`);
      this.title.setTitle(translatedTitle);
    } else {
      const translatedTitle = this.translate.instant('titles.site-name');
      this.title.setTitle(translatedTitle);
    }
  }
}
In Angular, define a provider to override the default TitleStrategy. Use CustomTitleStrategy as the implementation by specifying it with the provide and useClass properties in the provider configuration
export class CoreModule {
  static forRoot(): ModuleWithProviders<CoreModule> {
    return {
      ngModule: CoreModule,
      providers: [
        {
          provide: TitleStrategy,
          useClass: CustomTitleStrategy,
        },
      ],
    };
  }
}
The translations JSON:
"titles": {
"site-name": "stackoverflow",
"other": "Other Code"
}
Have you found an answer for this? I'm experiencing the same issue with the 4.8 framework.
Have you been editing any YAML files associated with that job, or on the Jenkins host? This error occurs only if a YAML config file is configured improperly.
"This YAML error is typically caused by incorrect indentation or nesting of data structures. YAML relies on indentation to denote structure and relationships."
https://www.geeksforgeeks.org/how-do-i-resolve-a-mapping-values-are-not-allowed-here-error-in-yaml/
The answers above did not work for me using Bootstrap 5.2, but this one did: https://stackoverflow.com/a/71738702
VS Code has folding, but it depends on the language. For JavaScript:
//#region
//#endregion
There is a nice table located in VS Code documentation.
Did you manage to solve this? If so, how? I'm having the same issue with expo-notifications. The Expo notification tool and endpoint (https://api.expo.dev/v2/push/send) return 200, but on the dashboard (https://expo.dev/accounts/[org]/projects/[project]/push-notifications) every notification is marked as a failure, and there is no log where I can check what is wrong with my config (I followed the official docs, though).
Apparently I was using a value when I should have used a pointer, and my IDE did not detect the problem either:

func (p *Service) NewError(ctx context.Context, err error) (r *api.ErrorStatusCode) {
	var securityError *ogenerrors.SecurityError
	if ok := errors.As(err, &securityError); ok {
		// This works now
		log.Println("this is a security error", err)
	}
	return r
}
A semicolon ends a statement, so when you put one right after the parentheses you end the for loop with an empty body. The loop still runs and increments i to 5, and then the print statement runs once with that final i, which is 5.
After I had been stuck on this for 2 weeks, my project manager got many of my colleagues to brainstorm.
You need to import a certain Starfield certificate into your trust store.
From dev (connected over the internet) this gets imported automatically when you do a transfer.
In test (mimicking live with Direct Connect and no internet) you need to import the certificate manually.
/* The following formula will return a column as an array based on the arguments to the =arrayFormula(). Be sure to delete the "Variable" placeholders.
"_Column_Title" = The heading at the top of the column in the last frozen row.
"_Array_Formula" = The formula to be converted into an array. SPECIFY A RANGE NOT JUST A SINGLE CELL.
"_KEY_COL_LETR_RNG" = The "key" value column usually contains a value in each row, usually used in the =arrayFormula().
"_COLS_QTY" = the number of columns to generate, almost always just one (1).
How it works:
The curly braces surrounding the entire formula create an array starting with the column title followed by the data generated from the =array_Constrain().
The =array_Constrain() surrounding the =arrayFormula() limits the length of the =arrayFormula() column to the last cell containing data within the "key" value column.
The =max() formula determines the last row number in the "key" column that is NOT blank, then subtracts the row number of the "title" label, which also contains this formula. This supplies the "number of rows" argument to the =array_Constrain() formula.
The =arrayFormula() provides the first argument to the =array_Constrain() formula by generating an array based on applying the enclosed formula to each row that has corresponding data for the enclosed formula to use. If you are not familiar with =arrayFormula(), please use the many available posts on that subject to further inform yourself.
*/
={"Column_Title__"; Array_Constrain( ArrayFormula( xxx_Array_Formula__ ), Max( ArrayFormula( if( KEY_COL_LETR_RNG_="", "", row(KEY_COL_LETR_RNG_) ) ) ) - row(), COLS_QTY___ ) }
Using FVM you can manage and change your machine's default SDK and have VS Code reference that default path, so it's always synced between your terminal and IDE.
Edit your VS Code settings.json
with
{
"dart.flutterSdkPath": "/Users/user-name/fvm/default/bin",
}
or whatever your fvm default path is
Here is an example of code that can help you:

import asyncio

async def coro_function():
    return 2 + 2

async def execute():
    # TaskGroup requires Python 3.11+
    async with asyncio.TaskGroup() as group:
        group.create_task(coro_function())

asyncio.run(execute())
The code checks the value of x. If it’s not 2 and not 3, it prints the first message.
If x is 2 or 3, it prints the second message.
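That logic can be sketched in Python (the messages here are placeholders):

```python
def check(x):
    # Not 2 and not 3: the first message
    if x != 2 and x != 3:
        return "first message"
    # x is 2 or 3: the second message
    return "second message"

print(check(5))  # first message
print(check(2))  # second message
```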
It should be inside the assets folder. Let me know if it worked.
Could you solve it? I have the same problem :_(
Answering my own question: I wasn't paying attention to which router I was using (pages vs app router). I've converted my project from the pages router (which does not support server actions) to the app router.
For me, the fix was to change black-formatter.importStrategy to fromEnvironment instead of useBundled. They update the Black formatter version in the extension every now and again, and if it's a different version than your command-line version they may disagree, so IMO it's better to just use the black from the environment.
Not sure how to close this question without a comment, but I've managed to get a solution working with the code used at the top of the question, in the Edit section.
It uses Python's in operator, so it should be like this:
IF    "classic" in ${list}
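The expression Robot Framework evaluates is plain Python membership; a minimal illustration (the list contents are invented):

```python
genres = ['classic', 'jazz', 'rock']  # stand-in for ${list}
print('classic' in genres)  # True
print('blues' in genres)    # False
```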
I ran into this error when using podman-compose to set up a Postgres container. The issue only happened on a macOS machine, but it could be more general. The fix was to increase the amount of memory used by the podman VM:
podman machine stop
podman machine set --memory $desired_memory_in_MB
podman machine start
Usually the error message will include the number of bytes it tried to allocate for shared_buffers, so set $desired_memory_in_MB to something larger than that.
Additionally, you can limit the amount of memory consumed by the container by adding limits to the command section of your docker-compose.yml:
db:
image: "docker.io/postgres:16.1-alpine"
...
command:
- "postgres"
- "-c"
- "shared_buffers=2G"
[1] - Increase Podman memory limit
[2] - How to customize the configuration file of the official PostgreSQL Docker image?
What should I put in Snowflake to have this return "b", "h" or "j", for example?
When decommissioning a service in Terraform, it's crucial to follow a step-by-step process to avoid breaking the Terraform state and plan checks. This guide will walk you through how to safely remove resources while keeping the provider and backend configuration intact until the final step.
Step 1: Remove Service Resources from Terraform Configuration
First, do not delete the entire module folder or main.tf file just yet. Instead, go into the module folder for the service you want to decommission.
Identify the resources specific to the service (e.g., aws_instance, aws_security_group, etc.) in the directory.
Comment out or delete only the resource definitions for the service you want to decommission.
For example:
# resource "aws_instance" "example_service" {
# ami = "ami-12345678"
# instance_type = "t2.micro"
# ...
# }
Step 2: Run Terraform Plan to Validate Changes
terraform plan
Step 3: Commit Your Changes to Your Branch and Open a PR
Step 4: Get Your PR Approved and Deploy the Changes
Step 5: Clean Up Backend Configuration and Workspace (if applicable)
Why Keep the Provider and Backend Configuration Until Now? The backend configuration is how Terraform knows where to store the state of your infrastructure. If you remove the backend configuration too early, Terraform will lose access to the state file, and it won’t be able to track which resources still exist or need to be destroyed. This can lead to orphaned resources that Terraform can no longer manage, increasing the risk of drift in your infrastructure.
The provider configuration is necessary to communicate with your cloud resources. Removing it too soon would break Terraform’s ability to connect to the cloud provider, preventing it from destroying the existing resources properly.
Now that all resources have been destroyed, you can proceed to remove the provider and backend configurations.
Create a new branch (or use the existing one).
Assuming you have previously removed all resource files and are only left with the provider and backend configs, you can proceed to remove those files from the directory along with the directory itself. If this directory has a workspace defined for Terraform Cloud, you can proceed to delete that workspace as well.
Commit your changes to your branch and open a PR.
Get your PR approved and merge the changes.
Summary By following these steps, you'll ensure that the service is decommissioned properly without breaking your Terraform state or plan checks. Always remember to keep the provider and backend configuration in place until after the resources are fully destroyed. Removing these configurations prematurely can cause Terraform to lose access to the state file, resulting in orphaned resources and infrastructure drift that can be challenging to clean up later.
A friend had the same problem, so he created this to resolve the issue; here is the code. I hope it helps you.
For me changing from "HDR=NO;" to "HDR=YES;" in following OleDb Connection String resolved the issue:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source='Excele file location in File System';Extended Properties="Excel 12.0 Xml;HDR=YES; IMEX=1";
Could you give us your FunctionContributor.contributeFunctions method implementation? I had the same problem when I migrated from Spring Boot 3.1 to 3.2. The custom function implementations I wanted to register were incorrect.
before :
public class JsonBPathMatchCustomFunction extends StandardSQLFunction {
@Override
public void render(SqlAppender sqlAppender, List<? extends SqlAstNode> sqlAstArguments, SqlAstTranslator<?> translator) {}}
After :
public class JsonBPathMatchCustomFunction extends StandardSQLFunction {
@Override
public void render(SqlAppender sqlAppender, List<? extends SqlAstNode> sqlAstArguments, ReturnableType<?> returnType, SqlAstTranslator<?> translator) {}}
The "before" render override is deprecated (forRemoval = true) in hibernate-core version 6.4.10.
If you are in the same situation, you have to use the "after" render override.
In my case, that solved the problem.
This is not me; it's someone I do not know. I would not have that degree of expertise to make a request of that nature.
You have to update the shopifyAPI library so that the newest API versions get included in the library:
pip install --upgrade shopifyAPI
Thanks to @C3roe for getting me on the right track! I ran this in the console and now my menu stays open!
document.querySelector('.wp-block-navigation__responsive-container')?.classList.add('has-modal-open', 'is-menu-open');
Warning 1909: "Could not create shortcut Node.js command prompt.lnk. Verify that the destination folder exists and that you can access it."

Solution:
Step 1) Open the Environment Variables settings.
Step 2) Go to the system variables.
Step 3) Remove the "ComSpec" variable.
Step 4) Open a CMD command prompt in Administrator mode.
Step 5) Type the command: npm install npm@latest

After following these 5 steps the problem will be solved. Then you can run node -v, npm -v, and npx -v, and you can easily create a React.js project. Thank you!
Using shallowRef instead of ref seems to work. We don't need Vue's reactivity system to deeply track the internal properties of the PDFDocumentProxy object.
Never mind; after debugging and researching, we can achieve it by means of this:
for content in resp.get('Contents', []):
if (
content['Key'] and content['LastModified'] <= datetime.now().astimezone() - timedelta(days=15)
):
yield content
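A self-contained version of the same idea with a stubbed list_objects_v2-style response (the keys and dates here are invented):

```python
from datetime import datetime, timedelta, timezone

def objects_older_than(resp, days=15):
    # Yield entries whose LastModified is at least `days` in the past
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    for content in resp.get('Contents', []):
        if content['Key'] and content['LastModified'] <= cutoff:
            yield content

resp = {'Contents': [
    {'Key': 'old.txt', 'LastModified': datetime.now(timezone.utc) - timedelta(days=30)},
    {'Key': 'new.txt', 'LastModified': datetime.now(timezone.utc)},
]}
print([c['Key'] for c in objects_older_than(resp)])  # ['old.txt']
```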
Neural networks are eager learning algorithms. I hope this was made clear in (either of) the following:
Fix the typo lamda → lambda; the Range column is then computed like this:

df3 = (pd.DataFrame(data)
       .groupby('boyorgirl')['testscore']
       .agg(['mean', 'sum', 'min', 'max'])
       .assign(Range=lambda x: x['max'] - x['min'])
       .reset_index())
$('.btn-export').on('click',function(){
$.ajax({
url:"<?php echo admin_url('admin-ajax.php') ?>",
type:"POST",
data:{
action:"export_data",
},
success:function(response)
{
if (response != 0) {
Swal.fire({
icon: "success",
title: "Downloaded",
text: "Thankyou for downloading this file",
});
var link = document.createElement('a');
link.href = 'data:text/csv,' + encodeURIComponent(response);
link.download = 'file.csv';
link.click();
} else {
Swal.fire({
icon: "error",
title: response.data.message,
text: "Something went wrong!",
});
}
},
error : function (xhr , status,error)
{
alert("AJAX Response:"+error);
},
});
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
add_action('wp_ajax_nopriv_export_data', 'export_data');
add_action('wp_ajax_export_data', 'export_data');
function export_data()
{
if ($_SERVER["REQUEST_METHOD"] == "POST") {
$args = [
'post_type' => 'book',
'posts_per_page' => -1
];
$query = new WP_Query($args);
if ($query->have_posts()) {
$csv = 'Post ID,Post Type,Post Title,Post DateTime,Book Author,Publisher Name,Price,Genre,Post Thumbnail' . "\n";
while ($query->have_posts()) {
$query->the_post();
$post = $query->post;
$post_id = get_the_ID();
$post_type = get_post_type();
$post_title = get_the_title();
$post_date_time = get_the_date() . ' ' . get_the_time();
$post_book_author = get_post_meta($post_id, 'author_name', true);
$post_publisher_name = get_post_meta($post_id, 'publisher_name', true);
$post_book_price = get_post_meta($post_id, 'book_price', true);
$post_book_genre = get_post_meta($post_id, 'book_genre', true);
$post_thumbnail_url = get_the_post_thumbnail_url($post_id);
$csv .= '"' . $post_id . '","' . $post_type . '","' . $post_title . '","' . $post_date_time . '","' . $post_book_author . '","' . $post_publisher_name . '","' . $post_book_price . '","' . $post_book_genre . '","' . $post_thumbnail_url . '"' . "\n";
}
wp_reset_postdata();
header("Content-type: text/csv");
header("Content-Disposition: attachment; filename=file.csv");
echo $csv;
exit;
} else {
echo 0;
}
} else {
wp_send_json_error(array("message" => "REQUEST METHOD POST"));
}
}
When I try to change to port 80/443, I get the warning: • /sbin/launchd is blocking port 80
This never happened before installing MAMP Pro 7. Is there any way to free port 80 so that Apache works?
public OnClickListener saveButtonListener = new OnClickListener() {
    public void onClick(View v) {
        addDetailedActivity(ETinfo.getText().toString());
        ETinfo.setText("");
    }
};
When I re-open VS Code, it goes back to the original.
Figured it out. Looks like it was my fault, I think the database was locked by my external table editor but an error of that kind was never thrown in Python.
The only issue I can imagine is importing Base from some other module. There could be multiple sessions created without a proper binding, which can cause rollback to malfunction. Declare a new base with test models to properly isolate the test environment. If that solves the issue, we may debug the rest.
We could also explore a library like model-bakery, which might help isolate our models, although I'm not sure about it.
In the context I work in, we did feel some slowness in September/October. Our batches are slightly larger than yours, and we execute 5 batches in parallel. On good days, it takes seven and a half hours to reach the finalized state.
While we are not getting stuck in preprocessing in recent executions (during this month), we do get stuck in the finalizing state. If the process stays X hours in this state, we delete the batch and call it a timed-out process. The same was done in the preprocessing state if it stayed too long there. That's all we can do about it so far to make sure we don't get a never-ending process.
We did not receive any info from them regarding endpoint updates.
When you said:
I'm pretty sure mailchimp deletes any outstanding batches 14 days after they were created
Do you have any documentation on that? I did not find anything related to that in mailchimp documentation.
And do you know technically what happens in the preprocessing status? I am trying to decipher what 'smaller operations' means here.
Sorry to present more questions than answers :)
I did get this figured out. In the Windows Advanced Audit Policy, aside from auditing Removable Storage, I also needed to audit "File System", which is not ideal, but I do now get the actual folder created. For others attempting to audit removable storage, you also need the following registry setting set to 1:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Storage\HotplugSecureOpen
This setting also seems to require a reboot.
The following combo might also be helpful:
1. Delete the bin and obj folders.
2. dotnet nuget locals all --clear
3. dotnet restore
In your second code snippet you have a semicolon ; after the for loop: for (i = 0; i < 5; i++);. The loop still runs and increments i, but the body is empty because of the ;, so nothing happens inside it. i becomes 5, causing the loop to terminate, and the statement after the loop then prints 5.

Try this:
ChromeOptions options = new ChromeOptions();
options.AddArgument("--no-first-run");
options.AddArgument("--disable-features=OptimizationGuideModelDownloading,OptimizationHintsFetching,OptimizationTargetPrediction,OptimizationHints");
There’s a good chance you’re using an outdated version of the Apache Beam library. Try to upgrade to the latest version of Apache Beam. So in your requirements.txt file, specify a more recent version. Then rerun pip install -r requirements.txt in your GitHub Actions workflow. Probably at least Beam 2.46.0 or later should resolve this issue.
You can use the following formula
=IF(MOD(ROUNDDOWN((ROW()-1)/2,0),2)=MOD(ROUNDDOWN((COLUMN()-1)/2,0),2),
INDEX($B$1:$B$2,MOD(ROW()-1,2)+1,MOD(COLUMN()-1,2)+1),
0)
Thx
I don't believe there is a way to pass the join table information into the delegate. You will need to create a graph instance in your static function and use it to search the database via BQL using the Primary DAC keys.
Use defer: The defer attribute makes sure the script is executed after the HTML is parsed.
Use async: The async attribute allows the script to load and execute asynchronously.
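For example (the file names here are placeholders):

```html
<!-- defer: fetched in parallel, executed after HTML parsing, in document order -->
<script src="app.js" defer></script>
<!-- async: fetched in parallel, executed as soon as it arrives (order not guaranteed) -->
<script src="analytics.js" async></script>
```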
Solved. The issue was not in my code. The issue was upstream in the bee-queue
library. The promise was not being caught and rejected to the downstream promise chain.
See this PR for more info: https://github.com/bee-queue/bee-queue/pull/878
Proposal for Premeditado.
Hello, pleased to greet you.
I wanted to write to you because it seemed interesting to discuss with you the possibility of Premeditado appearing every month in digital newspapers as news, to rank in the top positions on the internet; that is, with real articles inside the newspaper that are not marked as advertising and are not deleted.
The news is published by more than forty high-authority newspapers to improve your website's ranking and reputation.
Could you give me a phone number so I can offer you a free month?
Thank you.
Imagine you're a new node, and you've just connected to the network to download the blockchain. You get data for multiple different blockchains; which one is valid? The genuine blockchain is the oldest, but that's not obvious. We have to add time to the blockchain to prove its age. Work is energy over time; therefore, POW is proof of time.
The only reason it's applied to a block hash is to create a delay, the block interval, and to enable the network to regulate it. It does not secure block data any better than a natural hash. POW can be applied to anything in a block and it would work just as well.
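The time/delay point can be sketched in Python; the "work" is just a brute-force search whose expected duration is set by the difficulty (the data and difficulty here are arbitrary):

```python
import hashlib

def mine(block_data: bytes, difficulty: int):
    # Search for a nonce whose SHA-256 digest starts with `difficulty`
    # zero hex digits; the search time, not the data, is what the work proves
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith('0' * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b'block header bytes', 4)
print(digest[:8])
```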
Closing and Restarting ALL Visual Studio Code windows worked for me.
In C++20, the std::chrono library does not allow direct arithmetic operations (like adding days) on std::chrono::year_month_day. However, you can achieve this by combining std::chrono::year_month_day with std::chrono::sys_days, which is a std::chrono::time_point specialization that represents a day in the Gregorian calendar. Here's how you can do it:
Convert your std::chrono::year_month_day to std::chrono::sys_days. Add one day to the sys_days object. Convert the resulting sys_days back to std::chrono::year_month_day.
I was facing this issue; in VS Code it kept installing forever. Meanwhile, the app appeared on the emulator but crashed as soon as I tried to open it.
It turned out the package name in MainActivity.kt was not the same as my application ID.
Ensure the application ID is the same in android/app/build.gradle and android/app/src/main/kotlin/MainActivity.kt.
Same problem here.
Salesforce Authenticator on my phone can show my Heroku account, with a 6-digit code, Heroku, and my name. However, every time I try to log in, no notification pops up on my phone, and every time I enter the code, it fails.
I completed the setup of adding Salesforce Authenticator via Manage Account -> Manage Multi-Factor Authentication -> One-Time Password Generator -> scanning the QR code with my phone.
It seems that my phone knows about the existence of my Heroku account; it just doesn't help me log in.
You can just add an additional backtick to the suggestion
block to differentiate it from the code blocks, as shown below:
````suggestion
```
Added new lines and content here, with more back-ticks ```
````
Source: https://github.com/github/markup/issues/1687#issuecomment-1836916420
Does using setState inside onSelected help you?
onSelected: (SampleItem item) {
setState(() {
selectedItem = item;
});
},
The widget needs to be a StatefulWidget.
More here: https://api.flutter.dev/flutter/material/PopupMenuButton-class.html
Turns out I had two settings in conflict. Using this thread (Visual Studio Code - Convert spaces to tabs) I found that I had to uncheck both of these editor items:
I only had the 'insert spaces' option unchecked. It seems that even though I used the tab character, if VSCode has the 'Editor: detect indentation' option checked, it will detect tab characters but will default to converting tabs to spaces still. So every new file will use spaces. Both need to be unchecked to use the tab character, allow tab stops locations to be respected, and keep that setting for new script files.
Properly implementing repeating conic/sweep gradients in Chrome's PDF generator was a TODO. Fixing this was tracked as Chromium 374253366 and fixed with [pdf] Implement sweep gradient tilemodes in Chrome 132.0.6828.0 and later.
Unfortunately, the only workaround before this Chromium change is to use a conic-gradient
and manually write out the repeats.
In addition to the above, you can directly use the in condition as shown below:

if 'pythontest' in config[section]:
Please refer https://docs.python.org/3/library/configparser.html for different methods with ConfigParser class.
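A minimal self-contained sketch (the section and key names are made up):

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[section1]
pythontest = hello
""")

section = 'section1'
# The section proxy supports `in` directly, no .has_option() call needed
if 'pythontest' in config[section]:
    print(config[section]['pythontest'])  # hello
```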
Just use the "regular" opacity attribute.
// Set button colour & Text colour
ion-button[disabled] {
--ion-color-base: var(--light-gray) !important;
--ion-color-contrast: var(--gray) !important;
opacity: 1;
}
Just right-click on the text box -> Expression and write this:
="Page " & Globals!PageNumber & " of " & Globals!TotalPages
That's it.
Note that the JdbcTemplate is not exactly a Strategy, either. In the Strategy pattern the Strategy is an instance variable. I wonder if it's useful to give names to each and every variant of a pattern; it seems that library creators don't care too much.
I guess the creators of Spring thought of the idea behind the template more than about the technical details. The very name of the Template Method pattern conveys its core idea: a fixed sequence of actions which is already coded, and a variant part which we want to create and design later.
The use of inheritance is probably just a technical detail. The original version (designed for C++, which did not distinguish interfaces from abstract classes) uses inheritance; the JdbcTemplate passes an interface instead. Either way, the fixed sequence with a pluggable variant part is the important and relevant part.
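The distinction can be sketched in Python (the class and function names here are invented):

```python
from abc import ABC, abstractmethod

# Classic Template Method: the fixed sequence lives in the base class and
# the variant step is supplied by subclassing.
class Pipeline(ABC):
    def run(self, rows):
        cleaned = [r.strip() for r in rows]   # fixed part, already coded
        return self.transform(cleaned)        # variant part, designed later

    @abstractmethod
    def transform(self, rows): ...

class UpperPipeline(Pipeline):
    def transform(self, rows):
        return [r.upper() for r in rows]

# JdbcTemplate-style: the same fixed sequence, but the variant part is
# passed in as a callback instead of being supplied by inheritance.
def run_pipeline(rows, transform):
    cleaned = [r.strip() for r in rows]
    return transform(cleaned)

print(UpperPipeline().run([' a ', ' b ']))                               # ['A', 'B']
print(run_pipeline([' a ', ' b '], lambda rs: [r.upper() for r in rs]))  # ['A', 'B']
```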
You need to enable systemd service;
sudo systemctl enable amazon-cloudwatch-agent.service
The merchant ID should be an account ID. So, just create the payout with it.
map((ob: any) => { return ob.data.yourKey })
will return the object/array at that key location to the stream.
Won't this be much simpler?
def validate_string(string):
spl = string.split(',')
return all(word.count('=') == 1 for word in spl)
As of now (2024) I find that dev containers just work out of the box with the host ssh-agent. I just installed the ssh-agent plugin with zsh, and it works with my host SSH credentials when I make a commit. I have not tried GnuPG keys yet.
How exactly it works, I have no clue.
Upgrading every package resolved this issue for me. In my case, after I upgraded the packages, the error did not occur again.
The issue for me was that I was using a shebang that bypassed the pyenv:
#!/usr/bin/python3
You should always use:
#!/usr/bin/env python3
If you have more than one deploy project, it is not recommended to delete all the content of "applications".
Changing the operating system (OS) on your Huawei Nova 9 involves replacing the default HarmonyOS (or EMUI, depending on the region) with another OS, typically a custom Android ROM or another operating system like LineageOS, AOSP, or similar alternatives. However, doing so requires a deep understanding of the process, as it can void your warranty, cause data loss, and even "brick" your device if not done properly.
Huawei devices have locked bootloaders and no official support for custom ROMs, and you also face the challenge of limited access to Google services due to ongoing restrictions. But it’s still possible to change the OS, provided you understand the risks involved.
This probably isn't the answer you're looking for, but for whatever reason, just removing the JAVA_HOME environment variable altogether solved this error for me.
How did you resolve this issue?
So it sounds like you need to essentially duplicate the contents of a document library from one SharePoint site into another, excluding PDFs. The setup you have now is good; all you really need to do is add another condition that checks whether the item is a folder or not, and then create the file/folder appropriately. Luckily, a recursive solution is not necessary.
The following is a general description of a flow that will copy all files (including folder structure and excluding PDFs) from one SharePoint site's document library to another SharePoint site's document library:
<the name of the document library you are copying files from>
<the name of the document library you are copying files to>
<the URL of the site you are copying files from>
strTemplateLibraryName
outputs('Get_files_(properties_only)')?['body/value']
items('Apply_to_each')?['{IsFolder}'] is not equal to true
The false branch of your Condition will be all iterations where the item is a folder and should have this structure:
items('Apply_to_each')?['{FullPath}']
last(split(outputs('FullFolderPath'), variables('strTemplateLibraryName')))
<the URL of the site you are copying files to>
<the document library you are copying files to>
outputs('FolderPath')
The true branch of your Condition will be all iterations where the item is a file and should have this structure:
items('Apply_to_each')?['{FullPath}']
first(split(last(split(outputs('FullFilePath'), variables('strTemplateLibraryName'))),item()?['{FilenameWithExtension}']))
<the URL of the site you are copying files to>
items('Apply_to_each')?['{Identifier}']
<the URL of the site you are copying files to>
/variables('strTargetLibraryName')outputs('FilePath')
items('Apply_to_each')?['{FilenameWithExtension}']
body('Get_file_content')
When I made this flow a while back, I was referencing this guide that you might find helpful. It is difficult to write out flows on here so please let me know if you have any questions.
Here are some basic code examples for AI-related tasks:
Python Code
1. Chatbot using NLTK and Tkinter
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import tkinter as tk
from tkinter import messagebox
# Tokenize and stem input
def tokenize_stem(input_string):
tokens = nltk.word_tokenize(input_string)
stemmed_tokens = [stemmer.stem(token) for token in tokens]
return stemmed_tokens
# Chatbot response
def respond(input_string):
# Basic response logic
if "hello" in input_string:
return "Hello! How can I assist you?"
else:
return "I didn't understand that."
# Create GUI
root = (link unavailable)()
root.title("Chatbot")
Create input and output fields
input_field = tk.Text(root, height=10, width=40)
output_field = tk.Text(root, height=10, width=40)
Create send button
def send_message():
input_string = input_field.get("1.0", tk.END)
tokens = tokenize_stem(input_string)
response = respond(input_string)
output_field.insert(tk.END, response + "\n")
send_button = tk.Button(root, text="Send", command=send_message)
Layout GUI
input_field.pack()
send_button.pack()
output_field.pack()
root.mainloop()
2. Simple Neural Network using Keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

# Create dataset (XOR truth table)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Create neural network model
model = Sequential()
model.add(Dense(8, input_dim=2, activation='relu'))  # a few extra hidden units help XOR converge
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train model
model.fit(X, y, epochs=1000, verbose=0)

# Make predictions
predictions = model.predict(X)
print(predictions)
3. Basic Machine Learning using Scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Create logistic regression model
model = LogisticRegression(max_iter=1000)  # avoid a convergence warning on this dataset

# Train model
model.fit(X_train, y_train)

# Make predictions
predictions = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
Java Examples
1. Simple AI using Java
import java.util.Scanner;

public class SimpleAI {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Enter your name:");
        String name = scanner.nextLine();
        System.out.println("Hello, " + name + "!");
    }
}
2. Java Neural Network using Deeplearning4j
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class JavaNeuralNetwork {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .list()
                .layer(0, new DenseLayer.Builder().nIn(784).nOut(250)
                        .activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(250).nOut(10).activation(Activation.SOFTMAX).build())
                // .pretrain(false).backprop(true) were removed in recent DL4J releases
                .build();
        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
    }
}
C++ Examples
1. Simple AI using C++
#include <iostream>
#include <string>

int main() {
    std::string name;
    std::cout << "Enter your name: ";
    std::cin >> name;
    std::cout << "Hello, " << name << "!";
    return 0;
}
2. C++ Neural Network using Caffe
#include <caffe/caffe.hpp>

int main() {
    // Sketch only: a real network also needs input blobs, layer names,
    // bottom/top connections, and layer-specific parameters.
    caffe::NetParameter net_param;
    caffe::LayerParameter* layer = net_param.add_layer();
    layer->set_type("InnerProduct");  // layer types are plain strings in current Caffe
    caffe::Net<float> net(net_param);
    return 0;
}
Is this any good to you?
import numpy as np

foo = np.array([0, 1, 2])
# bar: int = foo[1]  # fails strict type checking: the element is np.int64, not int
bar: int = int(foo[1])  # explicit conversion satisfies the annotation
print(type(bar), bar)
Output: <class 'int'> 1
The solution is to downgrade NumPy to 1.26.0; that fixed it for me. See [[Solved]] Face recognition test failing with correct image.
I ran into this today and found that the site was loading all.min.css without all.min.js. Once I added the JS, the Twitter X icon rendered correctly.
I was in the same situation but managed to solve it (using a Linux VM as the runner agent) with the script below:
# login to az devops
az config set extension.use_dynamic_install=yes_without_prompt
echo $(System.AccessToken) | az devops login --organization "$(System.CollectionUri)"
# get the variable group id
group_id=$(az pipelines variable-group list --project "$(System.TeamProject)" --top ${{ parameters.search_top_n }} \
--query-order ${{ parameters.search_order }} --output table | grep ${{ parameters.variable_group_name }} | cut -d' ' -f1)
# create or update the variable
az pipelines variable-group variable create --project "$(System.TeamProject)" --group-id ${group_id} --name ${{ parameters.variable_key }} \
--value "${{ parameters.variable_value }}" --secret ${{ parameters.is_secret }} --output table || \
az pipelines variable-group variable update --project "$(System.TeamProject)" --group-id ${group_id} --name ${{ parameters.variable_key }} \
--value "${{ parameters.variable_value }}" --secret ${{ parameters.is_secret }} --output table
# logout from az devops
az devops logout
These are some links that make it easier to understand:
Having no permission for updating Variable Group via Azure DevOps REST API from running Pipeline
https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
hxxps://bot.sannysoft.com is not a reliable check! Check here instead: hxxps://deviceandbrowserinfo.com/are_you_a_bot. There is no proper solution in the public space; only private techniques exist.
The background is 100% transparent, but the box-shadow still makes the border slightly visible:
box-shadow: 0 0 10px rgba(1, 1, 1, 1);
background-color: transparent;
This is not a Selenium solution, but you can make a request to the service in Python and grab the Content-Disposition response header; it contains the name of your download file.
There is a chance the request will get blocked, so you may need to experiment with the request headers to get around that.
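A minimal sketch of that approach using only the standard library (the URL in the usage comment is hypothetical):

```python
import re
import urllib.request

def parse_content_disposition(disposition):
    """Extract the file name from a Content-Disposition header value,
    e.g. 'attachment; filename="report.pdf"' -> 'report.pdf'."""
    match = re.search(r'filename="?([^";]+)"?', disposition)
    return match.group(1) if match else None

def download_filename(url):
    # A browser-like User-Agent sometimes helps if plain scripts get blocked.
    request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(request) as response:
        return parse_content_disposition(
            response.headers.get("Content-Disposition", ""))

# Hypothetical usage:
# print(download_filename("https://example.com/service/download/42"))
```

Note that the header is only present when the server marks the response as a download, and some servers send the filename unquoted, which the regex above also accepts.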
Check out my article at https://blog.knovik.com/node-auto-deploy-github-actions/, where I explain every step in a detailed, step-by-step guide.