My god Carlos, thanks so much for this; it saves so much time.
The feature maps generated by intermediate layers of a model like ResNet50 during supervised training can be considered part of the supervised learning process, though they don't directly correspond to the target labels.
During supervised learning, the optimization of parameters—including those responsible for generating feature maps—is driven by the loss function that evaluates the model’s predictions against the target labels. The feature maps are not explicitly supervised themselves (there are no direct labels for the feature maps), but their representations are indirectly shaped to improve the final classification outcome.
The intermediate layers, including conv5, learn features that are most relevant to the supervised task (image classification in this case). These features emerge as the model adjusts its weights to minimize the supervised loss, meaning the process that generates the feature maps is inherently tied to the supervised training pipeline.
In unsupervised learning, features would be extracted without reference to any labels, relying instead on intrinsic patterns in the data (e.g., clustering or autoencoders).
In supervised learning, the features are optimized to aid the ultimate supervised objective, even though the feature maps themselves are not directly compared to labels.
Since the generation of these feature maps is influenced by the supervised objective, they should be categorized as results of supervised learning. This is true even though there is no direct supervision at the level of individual feature maps; they are a byproduct of the overall supervised optimization process.
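To make this concrete, here is a minimal PyTorch sketch (assuming torchvision is installed) that pulls the conv5-stage feature maps out of a supervised, pretrained ResNet50 with a forward hook; torchvision names the conv5 block layer4:

import torch
from torchvision import models

# Load a ResNet50 whose weights were learned with supervised ImageNet labels.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

feature_maps = {}

def save_features(name):
    def hook(module, inputs, output):
        # The feature map is a byproduct of the supervised forward pass.
        feature_maps[name] = output.detach()
    return hook

# torchvision calls the conv5 stage "layer4".
model.layer4.register_forward_hook(save_features("conv5"))

# A random tensor stands in for a preprocessed 224x224 image.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)

print(feature_maps["conv5"].shape)  # torch.Size([1, 2048, 7, 7])
print(logits.shape)                 # torch.Size([1, 1000]), the supervised head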
According to this reference, the correct usage for these cases is and(not(failed()), not(cancelled())):
https://blogs.blackmarble.co.uk/rfennell/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/
[![enter image description here][1]][1]
So, here is the solution:
jobs:
  - ${{ each package in parameters.librariesToBePublished }}:
      - job: Publish_package_${{ replace(package.name, '-', '_') }}
        displayName: "Publish ${{ package.name }}"
        dependsOn:
          - ${{ each dep in package.dependsOn }}:
              - Publish_package_${{ replace(dep, '-', '_') }}
        condition: |
          and(
            not(failed()),
            not(cancelled()),
            eq('${{ parameters[package.name] }}', true)
          )
[1]: https://i.sstatic.net/AJ12GX78.png
I have had the same issue. One option is to use the Shortkeys extension; have you tried it?
Install the Shortkeys Extension: https://addons.mozilla.org/en-US/firefox/addon/shortkeys/
A second option is to modify Firefox's userChrome.js and userChrome.css files.
Let me know if this works, or if you would like details about modifying Firefox's files.
There is 'imarith' but nowadays (2024) it looks like you have to install IRAF, and that is a bit much.
To answer it short and simple: besides readability, no, performance won't be affected. You could use the old way if you need an empty array, since in my opinion that is a lot cleaner to read, but that kind of goes against the principle of an array, since arrays are not dynamic.
TL;DR: No
I know this is an old question, but there is a popular workaround for overcoming the limitations of the Android keystore.
If you need to generate and store any key (e.g. Ed25519, secp256k1), which is not supported by keystore, the usual strategy is to make use of a symmetric wrapper key.
Example: Ed25519 key
Now, whenever you need to access the Ed25519 private key, you first unwrap it using the AES master key, which is held in the keystore.
This is the best you can do right now; the alternative would be to use third-party key management services (AWS KMS, HashiCorp Vault, Azure Key Vault).
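To make the wrap/unwrap pattern concrete, here is a minimal sketch using the Python cryptography package; it only illustrates the flow, since on a real device the AES master key would be generated inside the Android Keystore and used through a Cipher there rather than held in application memory:

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.serialization import Encoding, PrivateFormat, NoEncryption

# 1. Generate the Ed25519 key that the keystore cannot hold natively.
signing_key = Ed25519PrivateKey.generate()
raw_key = signing_key.private_bytes(Encoding.Raw, PrivateFormat.Raw, NoEncryption())

# 2. Wrap (encrypt) it with the symmetric master key.
master_key = AESGCM.generate_key(bit_length=256)  # on Android this would live in the Keystore
aesgcm = AESGCM(master_key)
nonce = os.urandom(12)
wrapped = aesgcm.encrypt(nonce, raw_key, None)  # persist nonce + wrapped blob in app storage

# 3. Later: unwrap and reconstruct the key only when it is needed.
restored = Ed25519PrivateKey.from_private_bytes(aesgcm.decrypt(nonce, wrapped, None))
signature = restored.sign(b"some message")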
Just added the spring validation dependency:
implementation 'org.springframework.boot:spring-boot-starter-validation'
It worked again.
I used MC.exe from the Windows 10 SDK, and the binary file has version 5, which is unsupported on Vista/XP. MC.exe from SDK 7.1 generates version 3, which works.
The Angular Material Timepicker does not natively support displaying or selecting seconds. The standard Angular Material Timepicker only allows for the selection of hours and minutes.
You could still use a third-party library.
Challenges with the Facebook API Review Process
I have encountered significant challenges navigating the Facebook API review process. I have attempted the latest Instagram Basic Display API review seven times, each time receiving unconstructive feedback. Despite providing clear and precise instructions, the issues persist.
For instance, during the review, I created a brand-new Instagram account with Two-Factor Authentication (2FA) enabled. I generated backup codes and shared them explicitly, along with detailed instructions. However, the feedback indicates a lack of adherence to the process—the reviewers might not even be using the provided backup codes and instead improperly force a code into the 2FA field. And they also use their own Instagram account for the test (according to their feedback), despite a boldly written note telling them that only the test Instagram account has been white-listed for the test.
A concerning aspect of this process seems to be the outsourcing of reviews to under-trained teams, often based in locations such as the Philippines. This can lead to inconsistencies, and approvals often feel like a matter of luck rather than merit. Similar issues have occurred in my past interactions with Facebook’s review processes.
When faced with such challenges, my fallback has been engaging Facebook’s support team. However, this is only effective if you have an active ad account, as it seems to be the only way to capture their attention. In the past, their "manual reviews" have been instrumental in resolving similar issues, and I’ve now resorted to seeking their assistance for my stalled Instagram API review on my seventh attempt.
Facebook can and should improve this process to provide a more efficient and reliable experience for developers and businesses alike.
For those facing similar challenges, you can try the support link below (active ad account required): https://www.facebook.com/business-support-home
Best of luck with the Facebook team!
If you are using pg_bouncer, you can refer to this document:
The reuse of prepared statements has one downside. If the return or argument types of a prepared statement change across executions, then PostgreSQL currently throws an error such as:
ERROR: cached plan must not change result type
You can avoid such errors by not having multiple clients that use the exact same query string in a prepared statement but expect different argument or result types. One of the most common ways of running into this issue is during a DDL migration where you add a new column or change a column type on an existing table. In those cases you can run RECONNECT on the PgBouncer admin console after doing the migration to force a re-prepare of the query and make the error go away.
I finally found the issue. We had a pre-receive hook on GitHub that would check whether branch names are created with a specific prefix. It was silently preventing GitHub from creating the branch for the merge queue (gh-readonly-queue/*).
After allowing the prefix, everything was working fine.
PS: this issue also occurs if you use the rulesets feature in GitHub to enforce specific branch names, so you will need to allow the prefix there too.
Were you able to figure this out? I can't seem to get it working on my end.
I'm converting MP4 format to M3U8 format from the internet. I'll upload it to the backend panel and pull it into the application from there, but I don't understand how to upload it. It divides the video into short segments; do I need to upload all of them along with the M3U8 file?
In modern tmux:
bind t send-keys c-m '~.'
@albert,
Your question is several years old, but I am nevertheless posting this link, which may help you and give you some ideas for adding a smooth effect on details elements.
This is about WordPress, but I think it can also be useful in your case.
I also wanted to add a smooth effect on details elements, and I came across this site while doing research. With some adaptations I was able to create a nice smooth effect on my details elements.
Did you test this with the -e flag? I think that should work.
For example: echo -e "storePassword=test\nkeyPassword=test\nkeyAlias=test\nstoreFile=./upload-keystore.jks" > MyFile.txt
I don't know why; I'm doing everything and it is just not working. It's saying:
dpkg: error processing package linux-image-amd64 (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 linux-image-6.1.0-28-rt-amd64-unsigned
 broadcom-sta-dkms
 linux-image-6.1.0-28-amd64
 linux-image-amd64
@M.Deinum's comments (despite being patronizing) were spot on. In addition to the lost source, they also evidently made changes to the build and I was building with the wrong libraries. Correcting that fixed my issue.
So, after a lot of investigation of this issue, what I found is that I was wrongly referencing the spell-checker-worker.js in my main function that sets all the options for the DevExpress RichEdit component editor.
The solution for the spell-checker to function properly was to rewrite the function that loads all the initial options for the editor in this manner:
function loadOptionsToInitializeContainer() {
    const options = DevExpress.RichEdit.createOptions();
    options.confirmOnLosingChanges.enabled = true;
    options.confirmOnLosingChanges.message = 'Are you sure you want to perform the action? All unsaved document data will be lost.';
    options.width = '1100px';
    options.height = '1100px';
    options.bookmarks.visibility = true;
    options.bookmarks.color = '#ff0000';
    options.fields.updateFieldsBeforePrint = true;
    options.fields.updateFieldsOnPaste = true;
    options.rangePermissions.showBrackets = true;
    options.rangePermissions.bracketsColor = 'red';
    options.rangePermissions.highlightRanges = false;
    options.rangePermissions.highlightColor = 'lightgreen';
    options.handled = false;

    // here is the function which enables the spell checking utility on the editor
    enableSpellChecker(options, options.enableSpellChecker);

    var contextMenu = options.contextMenu;
    var reviewTab = new DevExpress.RichEdit.RibbonTab();
    var ribbonButton = new DevExpress.RichEdit.RibbonButtonItem("addWordToDictionary", "Add word to dictionary", { icon: "check", showText: true, beginGroup: true });
    reviewTab.insertItem(ribbonButton, 16);
    reviewTab.id = 16;
    reviewTab.localizationId = "Spellchecking tab";
    reviewTab.title = "Spellchecker";
    options.ribbon.insertTab(reviewTab, 16);

    var mailMergeTab = options.ribbon.getTab(DevExpress.RichEdit.RibbonTabType.MailMerge);
    options.ribbon.removeTab(mailMergeTab);

    var tab = options.ribbon.getTab(DevExpress.RichEdit.RibbonTabType.Insert);
    var tabHeadersFooters = options.ribbon.getTab(DevExpress.RichEdit.RibbonTabType.HeadersFooters);
    var fileTab = options.ribbon.getTab(DevExpress.RichEdit.RibbonTabType.File);
    var ribbonItemFooter = tab.getItem(DevExpress.RichEdit.InsertTabItemId.InsertFooter);
    var ribbonItemHeader = tab.getItem(DevExpress.RichEdit.InsertTabItemId.InsertHeader);
    var ribbonItemPageNumber = tab.getItem(DevExpress.RichEdit.InsertTabItemId.InsertPageNumberField);
    var ribbonItemHeadersFooters = tabHeadersFooters.getItem(DevExpress.RichEdit.HeaderAndFooterTabItemId.ClosePageHeaderFooter);

    // get the Export/Save item from the File tab and remove it
    var fileItemSave = fileTab.getItem(DevExpress.RichEdit.FileTabItemId.ExportDocument);
    fileTab.removeItem(fileItemSave);

    // remove the header/footer items from the Insert and Headers & Footers tabs
    tab.removeItem(ribbonItemFooter);
    tab.removeItem(ribbonItemHeader);
    tabHeadersFooters.removeItem(ribbonItemHeadersFooters);

    var richElement = document.getElementById("rich-container");
    return [richElement, options];
}

function enableSpellChecker(options, enableSpellChecker) {
    let boolValue = enableSpellChecker.toLowerCase() == 'true';
    options.spellCheck.enabled = boolValue;
    options.spellCheck.suggestionCount = 5;

    options.spellCheck.checkWordSpelling = function (word, callback) {
        if (!spellCheckerWorker) {
            var myDictionary = JSON.parse(localStorage.getItem('myDictionary')) || [];
            // this is where the error was: I was pointing the worker at the wrong directory
            spellCheckerWorker = new Worker('/Scripts/devexpress-richedit/spell-checker-worker.js');
            spellCheckerWorker.onmessage = function (e) {
                var savedCallback = spellCheckerCallbacks[e.data.id];
                delete spellCheckerCallbacks[e.data.id];
                if (e.data.suggestions != undefined && e.data.suggestions.length > 0) {
                    savedCallback(e.data.isCorrect, e.data.suggestions);
                } else {
                    savedCallback(e.data.isCorrect, myDictionary);
                }
            };
        }
        var currId = spellCheckerWorkerCommandId++;
        spellCheckerCallbacks[currId] = callback;
        spellCheckerWorker.postMessage({
            command: 'checkWord',
            word: word,
            id: currId,
        });
    };

    options.spellCheck.addWordToDictionary = function (word) {
        var myDictionary = JSON.parse(localStorage.getItem('myDictionary')) || [];
        myDictionary.push(word);
        localStorage.setItem('myDictionary', JSON.stringify(myDictionary));
        spellCheckerWorker.postMessage({
            command: 'addWord',
            word: word,
        });
    };
}
When I set up a demo page to check out what the issue was, I saw that the spell-checker-worker.js request in the Network tab in Google Chrome was returning a 200 result; it was functioning as it is supposed to.
After this, I was convinced that on my other main page, where the spellchecker was malfunctioning, something had to be incorrect. And effectively, the problem was that the worker JavaScript file was referenced with a wrong relative path.
Instead of this which is the correct way to go:
spellCheckerWorker = new Worker('/Scripts/devexpress-richedit/spell-checker-worker.js');
I had it like this:
spellCheckerWorker = new Worker('./spell-checker-worker.js');
If a similar issue has occurred to anyone, please check this answer; it will save you some headaches regarding this matter.
If I translated it correctly, it says that the package recipes
is missing. Install the package with install.packages("recipes")
and try again.
In Java, all non-static, non-private methods are virtual by default. This means they can be overridden in subclasses. Java does not explicitly use the "virtual" keyword like C++, as method overriding is an inherent feature of inheritance. For more details, read this blog: Virtual Function In Java | Run-Time Polymorphism In Java
Using CRA, it will by default create and install dependencies in the directory where you run npx create-react-app. You can delete the node_modules folder and other things, and it will use packages from the root node_modules. In a monorepo, you sometimes use special packages for a specific project, so it doesn't make sense to use them globally. The answer is to not use CRA: create basic templates for base projects (folder structure, scripts, a modified package.json without global dependencies), then run npm i/ci and start the project.
I'm getting the same behavior. I looked at the plotly code and they seem very aware of this. The .show
method seems to write a temporary HTML file and then uses an iframe, which is displayed to visualize the temporary file.
You'll have to bypass FLAG_SECURE with a rooted device or third party app. If you want to use an app instead so you don't have to bypass FLAG_SECURE then I recommend Tasker or AutoInput as they have worked for me in the past.
I was facing a similar issue today. I pasted the public key of the server in the authorized keys section, and by doing this the issue was resolved for me. Documentation followed: https://plugins.jenkins.io/publish-over-ssh/
According to the urllib3 v2.0.0 release notes, OpenSSL versions earlier than 1.1.1 are not supported anymore.
Check the releases page for the latest version prior to 2.0.0, e.g. 1.26.20
Oh! When I did this in my paginator function's parameter, it shows the next items after the last index, but it doesn't work when I add a limit to the next items (I am using Kotlin):
lastItem: DocumentSnapshot? = null
And when I remove the nullable operator and the null default, it shows the next items in the whole list and works with the limit, but does not show them after the last index.
To debug the issue, I have two suggestions for you:
First, please check whether the API expects the word "Token" before the actual token. Some APIs expect "Bearer" before the actual token: .header("Authorization", "Bearer d2618176b9b9aed6dc0a9cb3a1ebfe1c4c8831ed999bdce4432e061aa56f672f")
Second, log your request by enabling logging (log().all()) in REST Assured to debug your request and verify that the headers and body are being sent correctly. Resource
For whom it may be useful, you can now do this, paying attention to the last index:
if (!$loop->last) {
$collection->get($loop->index + 1)
}
There could be two possible reasons. In my case, executing setantenv.bat before executing the ant clean all command solved the error.
Did you find the issue?
I found that com.google.mlkit:playstore-dynamic-feature-support:16.0.0-beta2
would pull in a version of com.google.android.play:core.
I need to find a replacement for it.
This solution works great! Any ideas how to adapt it to sync topics completion across courses instead of lessons?
Building upon what @Snorik wrote, I just want to clarify terminology with respect to Python:
Python, by default, does not support method overloading. So you can either: (a) "cheat" your way with if-elif-else, (b) use default arguments (None), or (c) use the @dispatch decorator. For more info and code examples on the decorator, check the following link (https://www.geeksforgeeks.org/python-method-overloading/)
Overriding is what happens in Python by default. That is exactly what is happening in your first example of Python code. Now overriding is useful when there are multiple classes sharing the same method name, or when a parent method isn't suited for the needs of a child class and requires modification. Check this link for further examples and code (https://www.w3schools.com/PYTHON/python_polymorphism.asp).
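To illustrate both points with a small, self-contained sketch (the class names are just examples): overloading is simulated with a default argument, and overriding happens through plain inheritance.

class Greeter:
    # "Overloading" simulated with a default argument: one method,
    # callable with or without a name.
    def greet(self, name=None):
        return f"Hello, {name}!" if name else "Hello!"

class LoudGreeter(Greeter):
    # Overriding: the child class replaces the inherited method.
    def greet(self, name=None):
        return super().greet(name).upper()

print(Greeter().greet())           # Hello!
print(Greeter().greet("Ada"))      # Hello, Ada!
print(LoudGreeter().greet("Ada"))  # HELLO, ADA!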
Now for a comparison of Java and Python Polymorphism check the table.
|                                      | Python                                                                | Java                                                                                        |
|--------------------------------------|-----------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
| operator overloading (+, -, *, ...)  | yes, within each class using __add__ for +, __sub__ for -, and so on  | no                                                                                            |
| method overloading                   | no (by default); workaround with the @dispatch decorator              | yes, by default                                                                               |
| method overriding                    | yes, by default                                                        | not by default, but in the context of child classes that override inherited methods from parent classes |
Hope this helps.
The question was about Django messages + toast, not alerts. I don't understand why everyone is referring to alerts.
In my case I had to add this to my build.gradle (:app) file:
implementation 'androidx.fragment:fragment-ktx:1.6.1'
It looks like something very complicated, but it isn't, so I will try to explain the common situation. I think it can occur for a lot of reasons:
The next step is to describe how to fix it all. Many things will be different depending on the version you currently use and the type of platform. In my case, it's Windows 11 and IntelliJ 2024.2.4.
**Very important: if something is wrong, continue to investigate the issue a bit more deeply in the settings or project (environment) configuration. Remember, this helped me, and I'm not sure it will help everyone. I hope you will deal with it!**
Repair solved the issue for me.
I know this is old, but I'm answering for the record in case someone is having this issue.
You probably just forgot to write the main method in main.dart
Just add:
void main() => runApp(const MyApp());
before the class.
Please run the below command line:
winpty gh auth login
That's all, enjoy.
For some reason it works for me, and it is not documented in the VS Code keyboard shortcuts.
You don't need to use nginx to check whether your Go application is listening. While you are on the same host as your application, check it directly.
Attach the logs from the terminal where you start your Go application. It seems it's not running at all, so maybe it was written with errors, or maybe it requires some args or environment variables, and so on.
Probably way too late to the party, but give justRun a go:
val savedStateHandle = mockk<SavedStateHandle>()
justRun { savedStateHandle["YOUR KEY"] = "Something" }
verify { savedStateHandle.set("YOUR KEY", "Something") }
I had the same problem, but I have no password set on my computer, so I pressed Enter, but it said the password was incorrect. I had to set a password (temporarily) and then it worked. (On a Mac, by the way.)
In my case I had to verify the account by adding a phone number, and then I got the buttons under Settings.
If you are using Docker Compose with a network, change the URL from localhost to the container name. For example, if your container is called backend on port 8080, it should be http://backend:8080 instead of http://localhost:8080, because of the network you establish in the compose file.
Try defining leaflet.markercluster.js at the top of your JS file.
Like this:
define([
'https://cdnjs.cloudflare.com/ajax/libs/leaflet.markercluster/1.5.3/leaflet.markercluster.js'
], function () {.....
This code solved the same issue for me.
Your server has to be stateless, but it has state. Either use Redis and keep your state over there, or change your application logic.
An addition for a collection of owned properties:
// Parent entity
public class Person
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public ICollection<Address> Addresses { get; set; }
}

// Owned type
public class Address
{
    public string Street { get; set; }
    public string Number { get; set; }
}

...

// Configuration
public class PersonConfiguration : IEntityTypeConfiguration<Person>
{
    public void Configure(EntityTypeBuilder<Person> builder)
    {
        builder.OwnsMany(person => person.Addresses);
    }
}

...

// On Address (owned property) modified:
var modifiedEntities = _dbContext.ChangeTracker.Entries<Person>()
    .Where(x => x.State == EntityState.Modified)
    .Select(p => p.Entity);

modifiedEntities = modifiedEntities.Concat(GetParentEntitiesOfModifiedOwnedEntities(modifiedEntities.Select(e => e.Id)));

The method for retrieving the parent/owner entities:

/// <summary>
/// Retrieve parent/owner entities of the modified Address entities.
/// </summary>
/// <param name="addedAndModifiedEntityIds">Ids of added and modified entities</param>
private IEnumerable<Person> GetParentEntitiesOfModifiedOwnedEntities(IEnumerable<Guid> addedAndModifiedEntityIds)
{
    var modifiedAddressEntities = _dbContext.ChangeTracker.Entries<Address>()
        .Where(p => p.State == EntityState.Modified || p.State == EntityState.Added || p.State == EntityState.Deleted);

    /* An owned entity (Address) has two shadow properties:
     * - Id: the internal unique id of the owned entity (here an int)
     * - 'external id': the id of the parent/owner entity (here a Guid)
     *
     * To retrieve the second one, we search for the shadow property which doesn't have the name "Id".
     */
    var linkedMemberIds = modifiedAddressEntities
        .Select(e => Guid.Parse(e.Members.First(m => m.Metadata.IsShadowProperty() && m.Metadata.Name != "Id").CurrentValue.ToString())) // get the parent entity id
        .Except(addedAndModifiedEntityIds)
        .ToHashSet();

    if (linkedMemberIds.Any())
        return _dbContext.ChangeTracker.Entries<Person>()
            .Where(e => linkedMemberIds.Contains(e.Entity.Id))
            .Select(p => p.Entity);

    return [];
}
No clue why, but on this server something is different. I had to prefix all my Go routes with /api/. So it is working now.
In previous projects I didn't need to add that prefix.
Depending on which aws cli version (v1 vs v2) you use, you need to consider the cli-binary-format option and its default:
The formatting style to be used for binary blobs. The default format is base64. The base64 format expects binary blobs to be provided as a base64 encoded string. The raw-in-base64-out format preserves compatibility with AWS CLI V1 behavior and binary values must be passed literally. When providing contents from a file that map to a binary blob fileb:// will always be treated as binary and use the file contents directly regardless of the cli-binary-format setting. When using file:// the file contents will need to be properly formatted for the configured cli-binary-format.
Documentation: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/invoke.html
AFAIK, there is no way to do this. F# is not supported by Roslyn, so you will need to do everything including integration with VS yourself.
You can look at how the parsing of text is done in this SO question: Parse a text-string to F#-code
I would also look into existing F# projects like Fantomas to see how they parse code.
I had to check the mongo container's logs and provide a MONGODB_REPLICA_SET_KEY; now it all works.
Since f is defined as void, it doesn't do anything originally and won't return anything.
However, when the lambda function is set to return int, undefined behavior will occur.
Yes. Once you use a model, it gets downloaded into your cache directory, meaning that the next time you call the model, even without an internet connection, it can be loaded as well.
Your default cache directory is ~/.cache/huggingface/hub
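As a quick check, here is a minimal sketch using the transformers library (the model name is just an example): load the model once while online, then force offline loading from the cache with local_files_only.

from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # example model

# First call: downloads the weights into ~/.cache/huggingface/hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Later, without a connection: load from the cache instead of contacting the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name, local_files_only=True)
model = AutoModel.from_pretrained(model_name, local_files_only=True)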
You can use scala-compress:
https://github.com/gekomad/scala-compress
I assumed that id is the name of the data.frame you provided at the beginning of the question (correct me if otherwise). Could something like this work maybe?
result_earliest_adjusted <- id %>%
mutate(DATE_TIME = as.POSIXct(paste(DATE, TIME), format='%Y-%m-%d %H:%M:%S')) %>%
group_by(MANUAL.ID, id, DATE.12) %>% # group by species, id, and sampling night
arrange(DATE_TIME, .by_group = T) %>% # sort by occurrence
filter(row_number()==1) %>% # take the first for each group
rename(Earliest_datetime = DATE_TIME)
result_earliest_adjusted
# A tibble: 1 × 7
# Groups: MANUAL.ID, id, DATE.12 [1]
DATE MANUAL.ID TIME HOUR DATE.12 id Earliest_datetime
<fct> <fct> <fct> <int> <fct> <chr> <dttm>
1 2024-08-05 EPTSER 23:55:56 23 2024-08-05 id_1 2024-08-05 23:55:56
I wrote the same example script in native Tcl expect, and there it behaves differently (and as intended).
The tcl expect example script:
#!/usr/bin/expect -f
exp_internal 1
log_user 0
set pattern {[\r\n]+[^\r\n<]+[#>%] ?$}
# spawn process (here: open serial console via picocom):
spawn picocom --baud 9600 --flow n /dev/ttyS8
# wait for output of spawned process to settle down:
expect -timeout 2 -re {.+} exp_continue
# send 2x newline ("enter"):
send "\n\n"
expect -re $pattern
send_user "Anything left in the buffer?\n"
expect -timeout 2 -re {.+} exp_continue
send "\x01\x11"
expect "Thanks for using picocom"
close
Relevant debug output:
send: sending "\n\n" to { exp4 }
Gate keeper glob pattern for '[\r\n]+[^\r\n<]+[#>%] ?$' is ''. Not usable, disabling the performance booster.
expect: does "" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{mas" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\n" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\nroot@rou" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\nroot@router> \r\n\r" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\nroot@router> \r\n\r\n{master" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\nroot@router> \r\n\r\n{master:0}\r\nroo" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\nroot@router> \r\n\r\n{master:0}\r\nroot@router" (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=no
expect: does "\r\n\r\n{master:0}\r\nroot@router> \r\n\r\n{master:0}\r\nroot@router> " (spawn_id exp4) match regular expression "[\r\n]+[^\r\n<]+[#>%] ?$"? (No Gate, RE only) gate=yes re=yes
expect: set expect_out(0,string) "\r\nroot@router> "
expect: set expect_out(spawn_id) "exp4"
expect: set expect_out(buffer) "\r\n\r\n{master:0}\r\nroot@router> \r\n\r\n{master:0}\r\nroot@router> "
Anything left in the buffer?
Gate keeper glob pattern for '.+' is ''. Not usable, disabling the performance booster.
expect: does "" (spawn_id exp4) match regular expression ".+"? (No Gate, RE only) gate=yes re=no
expect: timed out
The expect_out(buffer)
is the equivalent of the concatenated "Before" and "match" in the Perl version's debug output; in contrast to the Perl version, the match result contains everything up to the anchor at end-of-string, and not only the match up to the first end-of-line; the rest of the buffer is empty, hence the timeout of the following expect .+
call.
I think my initial problem is a bug in the Perl Expect module I currently use.
~$ dpkg -l|egrep expect
ii expect 5.45.4-2+b1 i386 Automates interactive applications
ii libexpect-perl 1.35-2 all Perl Expect interface
ii tcl-expect:i386 5.45.4-2+b1 i386 Automates interactive applications (Tcl package)
~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
~$
What endpoint did you use to fix it?
Step 1: import dart:ui as ui
import 'dart:ui' as ui;
Step 2: Wrap the container behind which you want the blur effect; adjust sigma to increase or decrease the blur.
ClipRRect(
  borderRadius: BorderRadius.circular(25),
  child: BackdropFilter(
    filter: ui.ImageFilter.blur(
      sigmaX: 4,
      sigmaY: 4,
    ),
    child: Container(
      width: 40.uw,
      height: 206.5.uh,
      decoration: BoxDecoration(
        borderRadius: BorderRadius.circular(25),
      ),
    ),
  ),
),
Output: I have kept the color transparent in the example output.
In phpMyAdmin I found that clicking "Change all column collations" wasn't working in some instances. Instead, I had to change the table collation first to something different, then back to the one I wanted.
Make sure "Change all column collations" is clicked in both operations and it should work!
This approach shows the next documents correctly, but it forgets the past documents and shows the next documents in the whole list?
filterQuery.startAfter(lastDocument!!)
.limit(3).get()
An integration test is testing of different modules or components working together. A regression test is testing previously tested features to ensure that new changes have not introduced new bugs in the existing functionalities.
For example, you are testing the interaction between a payment gateway and your e-commerce application; this is called an integration test. After fixing a bug in price calculation logic, you need to verify that the fix did not affect other parts of the system; this is called a regression test.
There is a new API for that. It intercepts back button presses on Android natively. It is only available in Chrome for now:
https://developer.mozilla.org/en-US/docs/Web/API/CloseWatcher
I got a couple of issues while working with next-pwa, so I just started using Serwist and everything looks fine.
You need to edit the handler: Code section > Runtime settings > Edit > change the handler path.
For me it worked and I got the results.
When your MongoDB connection string password contains special characters (like @, !, #, etc.), you need to URL encode the password. Use encodeURIComponent() to properly encode the password:
const password = 'password-with-special@characters';
const encodedPassword = encodeURIComponent(password);
const connectionString = `mongodb+srv://username:${encodedPassword}@cluster.mongodb.net/database`;
You can check it out here for more info: https://www.mongodb.com/docs/atlas/troubleshoot-connection/#incorrect-connection-string-format
You need to install JDK version 17 and add its path here: go to File > Settings, then navigate to Build, Execution, Deployment > Build Tools > Gradle.
Then restart Android Studio and run flutter build apk.
UPDATE 2
After updating my karma.conf.js file like this:
preprocessors: {
'src/app/**/*.spec.ts': ['webpack'],
'src/**/*.js': ['webpack']
},
webpack preprocesses the test files, but now I have an error related to the loader:
ERROR in ./src/app/app.component.spec.ts 17:13
Module parse failed: Unexpected token (17:13)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
| describe('AppComponent', () => {
|
> let fixture: ComponentFixture<AppComponent>;
| let modalService: NgbModal;
| let modalRef: NgbModalRef;
ERROR in ./src/app/email-checker-api.service.spec.ts 7:13
Module parse failed: Unexpected token (7:13)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
|
| describe('EmailCheckerApiService', () => {
> let service: EmailCheckerApiService;
|
| beforeEach(() => {
ERROR in ./src/app/email.service.spec.ts 8:13
Module parse failed: Unexpected token (8:13)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
|
| describe('EmailService', () => {
> let service: EmailService;
|
| beforeEach(() => {
ERROR in ./src/app/form/form.component.spec.ts 9:15
Module parse failed: Unexpected token (9:15)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
|
| describe('FormComponent', () => {
> let component: FormComponent;
| let fixture: ComponentFixture<FormComponent>;
|
ERROR in ./src/app/message.service.spec.ts 6:13
Module parse failed: Unexpected token (6:13)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
|
| describe('MessageService', () => {
> let service: MessageService;
|
| beforeEach(() => {
1 error has detailed information that is not shown.
Use 'stats.errorDetails: true' resp. '--stats-error-details' to show it.
I installed the ts-loader package, removed node_modules, and reinstalled it.
No luck; the package does not seem to be loaded by the webpack config.
If you want to go back to the previous page, you should do:
history.go(-1);
OP has a serious point here. Here's another way this creates a problem for the everyday Edge browser user.
Go to your bank's login page where they ask for your username and password. Edge will autofill what you used to log in last. In my case there are two separate credentials, one for me and one for my wife.
So when I try to select a different credential than the one last selected, you can see how Edge very briefly updates the two edit boxes to what you've selected, but then switches back a few milliseconds later to the prior credentials. If you are like me and select an account and quickly press Enter, then you get logged into the account you DIDN'T want to log in to.
If that doesn't happen, and after a few seconds I see Edge has kept the credentials I selected in the edit boxes, then I go to the login button on the page form, and there is a 50/50 chance that the moment I click that button, I CAN PHYSICALLY SEE Edge load the prior login data I DID NOT WANT and log me into the wrong account.
Both times I would have to sign back out and log back in; this time Edge REMEMBERS the account I actually wanted, since it is now the one last logged in, and fills the edit boxes on the form. Without me having to touch the boxes, I can just click on the login button AND GET INTO MY BLOODY BANK ACCOUNT!!!
This has been going on literally for months, if not years. I don't log in with different credentials very often, so this is a nuisance once in a while, like that fly that keeps coming back to sit on your neck. That's "Internet Edgeplorer" for you.
Googling this behaviour got me here; the time for googling is now over. It's time to talk about this and get it sorted, as well as how something like this goes on for so long without being detected by Microsoft. We should start a separate conversation about how Microsoft's quality mechanisms miss bugs like these.
If you want to use a relative path and you are using Spring Boot, you should do:
URL resource=this.getClass().getClassLoader().getResource("test.txt");
File file = new File(resource.getPath());
For PowerShell or GitLab Runner shells, you may need to use double quotes around properties to ensure each one is treated as a single argument in the console.
mvn test "-Dsurefire.rerunFailingTestsCount=2" -Dtest="KafkaTestSuite"
maven-surefire-plugin version 3.5.0 with JUnit 5
For me, disabling my DevExpress CodeRush extension solved this.
This is not an answer, but I will try to stop you 😂. You should avoid wrapping the logging code at any cost, because:
logger.warning("\(message, privacy: .public)") will be truncated if your message is too long.
logger.log("She said \(true, format: .answer)") will print She said YES.
logger.log("She said \(true, format: .truth)") will print She said true.
In case you find it brings more advantages to make the interpolated string public by default, you can try to extend OSLogMessage.
There is a proposal to allow optional chaining assignments which would allow you to do:
obj?.someProp = 42;
It won't check for the existence of someProp, just obj.
Proposal: https://github.com/tc39/proposal-optional-chaining-assignment
It is implemented in Babel, so you could use that already if you are transpiling.
This question stemmed from me looking in the wrong place. I'm leaving the code below in case it meets someone's needs.
from kivy.lang import Builder
from kivy.metrics import dp

from kivymd.app import MDApp
from kivymd.uix.menu import MDDropdownMenu

KV = '''
MDScreen:

    MDRaisedButton:
        id: button
        text: "Press me"
        pos_hint: {"center_x": .5, "center_y": .5}
        on_release: app.menu_open()
'''


class Test(MDApp):
    def menu_open(self):
        menu_items = [
            {
                "text": f"Item {i}",
                "on_release": lambda x=f"Item {i}": self.menu_callback(x),
            } for i in range(5)
        ]
        MDDropdownMenu(
            caller=self.root.ids.button, items=menu_items
        ).open()

    def menu_callback(self, text_item):
        print(text_item)

    def build(self):
        self.theme_cls.primary_palette = "Orange"
        self.theme_cls.theme_style = "Dark"
        return Builder.load_string(KV)


Test().run()
More detailed information is in the documentation.
The type of the date must be a string (for the calendar component).
I use the type Date, which must be converted to a string in the given format,
so I used mm/yy (PrimeNG formatting), which translates to "12/2024".
import { DatePipe } from '@angular/common';
constructor(private datePipe: DatePipe) { }
ngOnInit(): void {
this.date = this.datePipe.transform(new Date(), 'MM/yyyy');
}
The solution can be that simple :)
How can we submit a value that we want to search?
Looks like the separator is a dot (.) rather than a slash (/).
Your solution is:
imapcon.uid('COPY', emailid, 'Inbox.test1in')
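If you want to confirm which delimiter your server uses (assuming imapcon is an imaplib connection; the host and credentials below are placeholders), you can list the mailboxes and read the quoted character between the flags and the folder name:

import imaplib

imapcon = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
imapcon.login("user@example.com", "password")     # placeholder credentials

# Each response line looks like: (\HasNoChildren) "." "Inbox.test1in"
typ, mailboxes = imapcon.list()
for line in mailboxes:
    print(line.decode())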
I ran into a similar issue. Just make sure serverless:
is set to true and the schema will be correct:
sink:
  - opensearch:
      hosts: https://a123.us-east-1.aoss.amazonaws.com
      aws:
        sts_role_arn: arn:aws:iam::123:role
        region: us-west-1
        serverless: true
For PHP I used this:
"editor.codeLens": true,
"php.codeLens.enabled": false,
"editor.suggest.showReferences": false,
"editor.codeLens": false was removing too much stuff, like accepting git conflict resolutions.
You could leverage Inertia's partial reloads in tandem with the navigate
event to work around this issue. Something like the following might work:
import { router } from '@inertiajs/vue3'
// Listen for navigation through history
router.on('navigate', () => {
// Reload only the props required
router.reload({ only: ['cart'] });
});
See this: Available Options
Also look for better tools than tkinter; there are better GUI libraries in Python that natively support such things.
I found a solution... Just need to add what is listed here https://github.com/prettier/eslint-plugin-prettier?tab=readme-ov-file#configuration-new-eslintconfigjs
const eslintPluginPrettierRecommended = require('eslint-plugin-prettier/recommended');
module.exports = [
// Any other config imports go at the top
eslintPluginPrettierRecommended,
];
const withPWA = require('next-pwa')
module.exports = withPWA({
pwa: {
disable: process.env.NODE_ENV === 'development', //make sure to disable PWA on development
register: true,
scope: '/app' }
})
There is no SAP Cloud SDK monitoring API for emails.
If you want to collect metrics, such as email success/failure rates, you will have to loop over the responses from each mail and aggregate the data yourself. Each sendMail call will return a mail response, which you can use for this purpose.
Unfortunately, I also get the error if I use async_playwright; see https://github.com/microsoft/playwright-python/issues/462#issuecomment-2519869308
I'm unable to add a comment because I don't have 50 reputation.
I have a question.
How do I pass data to the start destination of a nested navigation graph, without directly calling the first screen the way you called it (GroupDetail)? My use case is to pass a string to all screens of the nested navigation graph. This means I'm calling GroupDetailGraph(userId = "123") from the root graph and want to give the user ID to all screens of GroupDetailGraph.
If you're saying don't call GroupDetailGraph with userId but call every screen with userId, then please tell me what the actual need for nested navigation is. Attached is the image from the Google doc.
Can you please tell me what I didn't understand from the Google doc?
@AliNaddaf The Google docs seem misleading. They read: "If no authorized Google Accounts are available, the user should be prompted to sign up with any of their available accounts. To do this, prompt the user by calling the API again and setting setFilterByAuthorizedAccounts to false. Learn more about sign up."
ref: https://developer.android.com/identity/sign-in/credential-manager-siwg
So if I understand your reply to Mustafiz012 on 27 Aug correctly, you are saying it is not possible for GetCredentialRequest to generate the UI for users to create a Google account for devices that have NO Google account? The only way is to use something like this?
fun getAddGoogleAccountIntent(): Intent {
val intent = Intent(Settings.ACTION_ADD_ACCOUNT)
intent.putExtra(Settings.EXTRA_ACCOUNT_TYPES, arrayOf("com.google"))
return intent
}
I wonder why Google did not release some sample codes for Sign In with Google using Credential Manager.
From your detailed explanation of the different scenarios, it seems that setting setFilterByAuthorizedAccounts to true or false only matters for existing Google accounts and does not apply at all to non-existing Google accounts, and hence Credential Manager does not provide any means to add a Google account. If so, it would be good to clarify this in the docs.
Although I will assume many devices will have Google accounts, as a developer I need to cater to all possible scenarios.
This is possible if you use PhpDocExtractor() instead of, or in addition to, ReflectionExtractor().
So to provide information about array key to Serializer all you need is annotate your array property with type description like this:
/** @var array<string, yourValueType> $arrayProperty */
note: that annotation must begin with /** (not /*)
What PhpDocExtractor does is parse the source code of your class, which may not be as efficient as gathering information via reflection, but it will definitely work as you expect.
PS: My question to the next generations of people is: How to do this without PhpDocExtractor() ?
I was having the same issue... what's the point of including the merchantIdentifier in the Expo plugins config then? i.e.:
[
"@stripe/stripe-react-native",
{
"merchantIdentifier": "merchant.***",
"enableGooglePay": false
}
]
I had a similar issue.
My txt file was opening correctly in Notepad, but in Notepad++ it was coming up the same as your example.
The solution for the UTF-16 BE and UTF-16 LE formats is adding the byte order mark to the hex code of the txt file before transferring.
For UTF-16BE you need to add 'FEFF' at the beginning of the hex code; for UTF-16LE you need to add 'FFFE' at the beginning of the hex code.
If you have the hex code of the TXT file, this might be the solution you are looking for.
This fixed my problem.
To ignore application 4xx errors when determining your environment's health, we can simply edit the EB health rule so that it ignores 400-499 HTTP status codes when deciding whether your environment instances are having trouble.
It is common for applications to receive many 4xx errors, for example, due to:
Client's API integrations using invalid credentials. Client-side test tools. Broken links that create 404 responses.
To configure HTTP 4xx status code checking using the Elastic Beanstalk console: open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.
In the navigation pane, choose Environments, and then choose the name of your environment from the list.
In the navigation pane, choose Configuration.
In the Monitoring configuration category, choose Edit.
Under Health monitoring rule customization, Activate the Ignore options.
To save the changes choose Apply at the bottom of the page.
This will ignore HTTP 4xx requests.
Follow these docs for a better understanding: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced-rules.html
I waited for some time and now it works. I think it was DNS propagation issue.
I had 14 of these errors showing on build. Angular 18.
I searched the interwebs and didn't find much. So I ran a match on the %s values that seemed arbitrary to me, and they corresponded to keyframe declarations in my SCSS file.
Specifically, keyframes that targeted @-moz-keyframes.
Not sure if this vendor prefix has been deprecated; I removed them and the build runs clean. Clearly Angular is ignoring them now.
You have to use aws_access_key_id as username and aws_secret_access_key as password.