There is now a nice example from Apple for Wallet Extensions (adding a card from your app to Apple Wallet):
https://applepaydemo.apple.com/wallet-extensions
"To request the private entitlement, as well as allow listing for the issuer app, send the following information to [email protected]."
Unfortunately, 9+ years later, CodeLens for Razor pages/components is still not implemented in Visual Studio.
It seems like your issue is caused by some unused or invalid code left in your components (e.g., Navbar.jsx or other parts of your application). When React attempts to render the app, this code may prevent it from rendering correctly, leading to a blank screen.
A return statement stops the execution of its method immediately and returns the value it was given. So yes, the first example would work just fine.
NOTE: the second example is more readable than the first one because only one return statement is being used.
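To illustrate (a minimal sketch in Python with made-up function names, not from the original question), both styles produce the same result; the early-return version simply exits the function at the first matching branch:

```python
def sign_early(n):
    # Early returns: execution stops at the first matching branch.
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"

def sign_single(n):
    # Single return: the result is stored and returned once at the end.
    if n > 0:
        result = "positive"
    elif n < 0:
        result = "negative"
    else:
        result = "zero"
    return result

print(sign_early(5), sign_single(5))  # prints: positive positive
```

Which style is more readable is largely a matter of taste; the behavior is identical.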
There's no way to call touch like pathlib.Path.touch("my_path") according to the documentation, hence there's no way to assert it with touch_mock.assert_any_call(Path(p / "output" / "output_file")); it's just pointless. I'll only assert that the method has been called.
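For reference, here is a minimal Python sketch of that weaker assertion (the function and paths are hypothetical, assuming the code under test calls Path.touch as a bound method):

```python
from pathlib import Path
from unittest import mock

def create_output(base: Path) -> None:
    # Code under test: touch is called as a bound method, so the path
    # is `self`, not an argument the plain mock records.
    (base / "output" / "output_file").touch()

with mock.patch.object(Path, "touch") as touch_mock:
    create_output(Path("/tmp/project"))
    # Without autospec, the mock receives no arguments, so we can only
    # assert that touch was called, not which Path it was called on:
    touch_mock.assert_called_once()
```

Note that patching with autospec=True would make the mock record `self`, which is one way to get the stronger assertion back if it's ever needed.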
I'm sure OP has moved on by now, but answering in case it's useful to others...
Per a GitHub issue and SO answer, it looks like you need to set the timeout directly on the class attribute before instantiating any S3FileSystem objects:
import s3fs

s3fs.S3FileSystem.read_timeout = 18000  # five hours
fs = s3fs.S3FileSystem()
Bitbucket is having an incident right now. You can see the status here: bitbucket.status.atlassian.com
Update - We are still investigating an issue with Bitbucket Web and Git operations that is impacting Atlassian Bitbucket Cloud customers. We will provide more details within the next hour. Jan 21, 2025 - 15:58 UTC
I just updated my system to 7.58 and it's no longer changing the header. Does anyone know how to change the response header in this new version?
This depends on what you want to achieve. A custom integration is used to give an application access to your account or projects. If the users you refer to are part of your team, then they don't have to add the custom integration to their accounts. You are, however, required to add the app to different accounts if the users are not part of the account but your app needs to get data from multiple accounts.
This response is misleading and wrong IMHO.
getLocalName() called on a resource is part of the API. The behavior is at least inconsistent even if no serialization is involved.
E.g., for a URI like 'namespace:1', getLocalName() returns an empty result, while the same code returns 'a' for 'namespace:a'. Serialization and deserialization are fine in both cases.
If overwriting changes you made in the generated code is an issue, you can put these changes into their own partial class files. These will not be overwritten when re-scaffolding from a changed database. As is also recommended here:
https://learn.microsoft.com/en-us/ef/core/managing-schemas/scaffolding/?tabs=vs#repeated-scaffolding
This one was moved some time ago. This legacy SDK is now hosted at
https://repositories.tomtom.com/artifactory/maps-sdk-legacy-android
These days Kestrel can be exposed to the outside world.
Kestrel is a cross-platform web server. Kestrel is often run in a reverse proxy configuration using IIS. In ASP.NET Core 2.0 or later, Kestrel can be run as a public-facing edge server exposed directly to the Internet.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/?view=aspnetcore-7.0&tabs=windows#servers
You should commit the transaction in the try block because this way you can roll back after any error caused by the transaction commit itself. If you commit the transaction in the finally block and the commit throws an error, your code will blow up.
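A minimal sketch of that pattern in Python with sqlite3 (the table and function names are made up for illustration):

```python
import sqlite3

def insert_account(conn, balance):
    try:
        conn.execute("INSERT INTO accounts (balance) VALUES (?)", (balance,))
        conn.commit()    # commit inside try: a failing commit is caught below
        return True
    except sqlite3.Error:
        conn.rollback()  # undo the work, including a half-applied commit
        return False
    # A finally block here should only release resources (e.g. conn.close()),
    # never commit: an exception raised by commit there would escape uncaught.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
ok = insert_account(conn, 100)
print(ok)  # True
```

The point is that the except branch can only react to a commit failure if the commit happens inside the try block.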
So there is currently no way of getting both the content and the sensitive data in one file?
The answer turned out quite surprising, and it took an hour with a colleague to understand it: the VC++ environment is so different from any of the MinGW/GCC environments that you have to download an entirely different version of SDL2 for it! Both libraries and headers are different. VC++ is not a POSIX compliant environment, which means in practice that things one might lightly presume to be part of C++ itself, including <unistd.h> and even <strings.h>, are not present.
I had made the mistake of taking an existing folder and installation set up for Visual Studio Code and GCC, and trying to slap a Visual Studio Solution on it.
This issue might not be evident to someone who is accustomed to using a package manager. But no! You wouldn't even use the same package manager! All becomes clearer if you download SDL2 directly from its website. This partial screenshot of their list of download options highlights the one I needed.
This also answers other questions I was struggling with, like why SDL (gcc version) was giving me libraries with the .a extension while CMake (VC++ world) was insisting on looking for libraries with the .lib extension.
The installation notes did not mention any compatibility with Struts 2.
Compatibility
- Struts-Layout 1.0 works with Struts 1.0 and 1.1. However, note that due to an inheritance change between Struts 1.0 and Struts 1.1, Struts-Layout must be recompiled to work with Struts 1.0.
- Struts-Layout 1.1 works with Struts 1.1 and 1.2.
- Struts-Layout 1.2, 1.3 and 1.4 work with Struts 1.2 and 1.3. They can also work with Struts 1.1 if you use the special compatibility jar.
https://web.archive.org/web/20111227074645/http://struts.improve-technologies.com/install.html
This library is not available anymore, but if somebody still has the JAR file for Struts-Layout 1.4 that they could share, I would appreciate it.
This question has bothered me for the better half of the last decade and sometimes I wonder if it's best not to question sorcery but curiosity takes over and I go down the rabbit hole.
I think I really understand the crux, the pain with which you’ve posed the question so I won’t be answering with digital logics but rather with analogies.
Let’s try to invent a hypothetical rudimentary computer with..
Bell’s Telephone…
Marconi’s Radio,
Edison’s Kinetoscope (or any motion picture pre world-war 2)
..and a simple calculator (and remember the word Keystroke. It is one of the major concepts which explain the crux of the software-hardware conversion.)
None of the above archaic inventions have software in them yet we can almost imagine a working computer made out of those parts. Combine the telephone, radio, kinetoscope, calculator (this time with more buttons for fancier algebras and calculus which is akin to modern day keyboard) and you ALMOST have a computer with no software but it almost works like a computer for an average joe.
This brings me to my first controversial statement:
THERE IS NO SOFTWARE.
I have been at pain just thinking about this because every time I ponder this question, my eyes are roving over the screen of my laptop when in fact all this time I should have been looking at the motherboard and asking, "OK, I just moved the trackpad and pressed the buttons (which are really sensitive in today's laptops; they complete/block a circuit, hence certain 0Vs and 5Vs can or cannot pass through. Every action has a corresponding electro-magnetic phenomenon in this realm) and voila! A program was compiled."
Now let’s combine a few more fancier components on this rudimentary computer with ancient non-software inventions such as network cards, Wi-Fi Adapter, Bluetooth Adapter, Speakers, Storage drives etc which leads me to another important point:
Each major component has a FIRMWARE pre-installed. In other words, the 0Vs and 5Vs are etched onto it using lithography and/or other techniques. Again, a FIRMWARE is a physical, fixed set of "grooves" of 0Vs and 5Vs which ALMOST cannot be changed. Now imagine there is a power button which allows the flow of electrons through these fixed 0Vs and 5Vs aka FIRMWARE, which causes our telephone, radio, calculator and now a fancy color TV to BOOT up.
But now we also have Input/Output (I/O) Components such as a mouse or a keyboard. Remember the keystroke I mentioned earlier. Every physical keystroke or a mouse-press combined with the magic in the paragraph above results in the interaction that leads us to believe that there is a SOFTWARE but there is none. Every software is information DECODED from the physical realm and DISPLAYED on screen which brings me to my second controversial statement:
THERE IS ALWAYS A PHYSICAL INTERACTION
And this is the important one. You can pretty much agree that a mouse/keyboard press is a physical interaction.. but how about that application you downloaded over the internet, right? Think about it: even in that case you moved your trackpad and pressed the unzip button on the screen. You MOVED something. But by then these FIRMWARES have given control to the OS (that means a fixed set of 0Vs and 5Vs has now "occupied" your space).
Just a quick sidenote: this is one of the reasons all major manufacturers pre-install the OS. It means the whole step of the firmware giving control to another software (again, 0Vs and 5Vs passing/blocking -> Operating System) is ALMOST fixed (you can always switch from Windows to Linux later), and what we think of as SPACE is nothing but a fixed set of 0Vs and 5Vs occupied. Otherwise there is no real concept of "space" like a physical room has.
Ok, back to the main point. These trackpad movements and keystrokes are all PHYSICAL, and we get confused because by this time there are layers and layers of abstraction and our eyes are hovering over the screen and not the motherboard.
But what about something downloading over the wifi.. no physical interaction there, right?
Oh.. but there is. Every wifi signal carries with it energy(photons) which can move electrons on the sensitive component present on your laptop. (Think photoelectric effect for a rudimentary understanding). A PHYSICAL interaction between photons (electro-magnetic waves) and electrons.
What about neural computing?
Your brain too sends out current, in this case the components are attuned to other voltage levels. But there is a physical interaction.
So again. THERE IS ALWAYS A PHYSICAL INTERACTION. ALWAYS, ALWAYS, ALWAYS.
Always!
Once you realize that there is no software and there is always a physical interaction, your digital logic, microprocessor, microcontroller and computer architecture subjects explain the rest.
To all devs that land on this question: after struggling with this for several days, the definitive solution was to delete the line php_admin_value[session.save_handler] = files from my PHP-FPM pool config. After this change, phpinfo() shows the handler 'user' as the 'Local Value', and the class is used.
I have the same issue with the Houzez theme and WP All Import: the data is imported successfully, but the images don't show up. I added them manually to the media library and to each property, but when I want to update prices with WP All Import it always "disconnects" the images from the properties. Did you find a solution? Thanks for all.
Perhaps you can try Matplotlib's object-oriented interface, by substituting as follows:
import matplotlib.pyplot as plt
import numpy as np
xs = np.array([100, 500])
ys = np.array([300, 800])
fig, ax = plt.subplots()
ax.plot(xs, ys)
plt.show()
I don't know if it will help you, but I have also had trouble with my graphs coming out all ASCII-art-like.
Firstly, the connector does not support this, but you have another option: create a folder for your logs, zip it, and archive it as a Jenkins artifact.
Use Jenkins archiveArtifacts to store the zipped logs as build artifacts, then use curl to send a notification to MS Teams with the artifact URL where the logs are stored. In my case, I saved the Jenkins logs from the pod folder, copied them to a new folder, zipped them, and archived them as artifacts; this triggers a notification to my MS Teams channel with the artifact URL.
And on macOS, you will find the developer certificates located in the ~/.aspnet/dev-certs/https/ folder.
It seems that this behavior is caused by HTTP header
vary: Origin
So before generating the PDF output, override/clear this header with
vary:
In PHP:
header ('vary:');
Based on your needs, I think this page lists all the errors that you might get when calling Auth.forgotPassword(): https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_ForgotPassword.html
There is currently no QUALIFY in Spark SQL, but we really need it. As mentioned in a comment above, there is a JIRA for it, but it does not seem to be getting any traction of late: https://issues.apache.org/jira/browse/SPARK-31561
I don't know the answer to your question, but I'm facing a similar issue.
My app is handling Google Calendar and Outlook Calendar integrations. It is creating events for both calendars for every appointment on the system (it is an appointments manager app).
The expected behavior is that the user receives an invitation whether it is using Google or Outlook.
The problem is that sometimes, when creating the event for the Outlook Calendar, the user receives an automatic duplicate on its Google Calendar. I think that's because some users link their Google email on Microsoft, so the events get replicated.
Any idea on how to detect if the user email has associated its email on both Google and Microsoft before creating the events?
This is because the local Azure emulator doesn't have data. The issue was fixed when I added all the folders to the local Azure Storage.
In researching for my own needs I came across this thread and was able to make the answer work by performing the following edits.
On line 1, I added a SearchBase filter to help narrow down my targets. I also selected both the name and SamAccountName attributes for later use.
On line 5, I changed the item $user to $user.Name so that the formatting would look as intended and so that the comparison would evaluate to True. This same adjustment is made to lines 6, 7, and 12.
On line 7, be sure to use a consistent file extension and adjust the path to be where your photos are stored. Best to stick to either .png or .jpg. Remember, there is a 100KB or 96x96 pixels limit for AD photos.
On line 9, the code is adjusted from $user to $user.SamAccountName so that the account can be discovered. In my environment, name wasn't a valid option.
I also added some additional logging to the end of the code so you can compare the number of accounts without a photo pre and post run. Optionally and omitted from my response, this data can then be exported to CSV or some other file for record keeping or auditing.
The full adjusted code from @FoxDeploy's answer is below:
$usersWithoutImage = Get-ADUser -Filter * -SearchBase "OU=Example,DC=ACME,DC=CORP" -Properties thumbnailPhoto | ? {(-not($_.thumbnailPhoto))} | select Name,SamAccountName
$repPics = (Get-childItem \\web01\rep-pics).basename
Write-host "Found $($usersWithoutImage.Count) users without a photo"
ForEach ($user in $usersWithoutImage){
    if ($repPics -contains $user.Name){
        Write-host "Users name $($user.Name) is in the users photo directory, uploading..."
        $imagePath = ".\$($user.Name).png"
        $ADphoto = [byte[]](Get-Content $imagePath -Encoding byte)
        Set-ADUser $user.SamAccountName -Replace @{thumbnailPhoto=$ADphoto}
    }
    else{
        Write-Warning "Users name $($user.Name) is NOT in the users photo directory, please update!!"
    }
}
$usersWithoutImage = Get-ADUser -Filter * -Properties thumbnailPhoto | ? {(-not($_.thumbnailPhoto))} | select Name
Write-Host "Found $($usersWithoutImage.Count) users without a photo since run!"
I'm trying to do this configuration to integrate the AWS VPN endpoint with Keycloak, but when the AWS VPN client redirects me to login, Keycloak returns an error page with "invalid requester". Looking at the Keycloak logs, the client_id is null. Do you know how I can fix this problem? If you have the code with your configuration, can you share it?
Why don't you use the Color property instead of ColorIndex?
Sub ChangeColor(addr As Range, color_result As Long)
    addr.Interior.Color = color_result
End Sub

Public Function VLookupCC(lookup_value As Variant, lookup_range As Range, column_index As Long) As Variant
    Dim i As Long
    Dim color_result As Long, color_idx_result As Long
    If column_index < 1 Or column_index > lookup_range.Columns.Count Then
        VLookupCC = CVErr(xlErrValue)
        Exit Function
    End If
    On Error Resume Next
    i = Application.Match(lookup_value, lookup_range.Columns(1), 0)
    On Error GoTo 0
    If i <> 0 Then
        VLookupCC = lookup_range.Cells(i, column_index).Value
        color_result = lookup_range.Cells(i, column_index).Interior.Color
        If lookup_range.Cells(i, column_index).Interior.ColorIndex = xlColorIndexNone Then
            color_result = xlColorIndexNone
        End If
    Else
        VLookupCC = CVErr(xlErrNA)
        color_result = xlColorIndexNone
    End If
    Evaluate "ChangeColor(" & Application.Caller.Address & ", " & color_result & ")"
End Function
Having had no response to my question, I have given up. I only use a subset of TOML, so I wrote the functionality myself and dropped tomli and tomli_w from the project.
It now works
Want me to help you via Discord?
As per this discussion, the formatting in language-ext is partially done using JetBrains alignment tools and partially manual. The author has opened a ticket with JetBrains to get automated support.
My device is having a similar problem: the test appears, but there is no corresponding image, making the test impossible unless you can correctly guess all ten questions.
Your thinking is correct.
These are the Available hooks for updating:
// begin transaction
BeforeSave
BeforeUpdate
// save before associations
// update database
// save after associations
AfterUpdate
AfterSave
// commit or rollback transaction
And they work as expected. Check this example; I'll use SQLite for simplicity:
package main

import (
    "fmt"
    "log"
    "time"

    "github.com/google/uuid"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
    "gorm.io/gorm/logger"
)

type Player struct {
    ID     *uuid.UUID `gorm:"type:uuid;primaryKey;" json:"ID"`
    TeamID *uuid.UUID `gorm:"type:uuid" json:"TeamID"`
    Name   string     `json:"Name"`
    gorm.Model
}

type Team struct {
    ID      *uuid.UUID `gorm:"type:uuid;primaryKey;" json:"ID"`
    Players []Player   `gorm:"foreignKey:TeamID;references:ID" json:"Players"`
    Name    string     `json:"Name"`
    gorm.Model
}

func (player *Player) BeforeUpdate(tx *gorm.DB) (err error) {
    if player.TeamID != nil && *player.TeamID != uuid.Nil {
        tx.Model(&Team{}).Where("id = ?", *player.TeamID).Update("updated_at", time.Now())
    }
    return
}

func main() {
    newLogger := logger.New(
        log.New(log.Writer(), "\r\n", log.LstdFlags),
        logger.Config{
            SlowThreshold:             time.Second,
            LogLevel:                  logger.Info,
            IgnoreRecordNotFoundError: true,
            Colorful:                  true,
        },
    )
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{
        Logger: newLogger,
    })
    if err != nil {
        log.Fatalf("failed to connect database: %v", err)
    }
    if err := db.AutoMigrate(&Team{}, &Player{}); err != nil {
        log.Fatalf("failed to migrate database: %v", err)
    }
    teamID := uuid.New()
    team := Team{
        ID:   &teamID,
        Name: "Team A",
    }
    if err := db.Create(&team).Error; err != nil {
        log.Fatalf("failed to create team: %v", err)
    }
    playerID := uuid.New()
    player := Player{
        ID:     &playerID,
        TeamID: &teamID,
        Name:   "Player 1",
    }
    if err := db.Create(&player).Error; err != nil {
        log.Fatalf("failed to create player: %v", err)
    }
    fmt.Printf("Initial UpdatedAt - Team: %v, Player: %v\n", team.UpdatedAt, player.UpdatedAt)
    time.Sleep(2 * time.Second)
    if err := db.Model(&player).Update("Name", "Updated Player 1").Error; err != nil {
        log.Fatalf("failed to update player: %v", err)
    }
    var updatedTeam Team
    if err := db.First(&updatedTeam, "id = ?", teamID).Error; err != nil {
        log.Fatalf("failed to retrieve team: %v", err)
    }
    var updatedPlayer Player
    if err := db.First(&updatedPlayer, "id = ?", playerID).Error; err != nil {
        log.Fatalf("failed to retrieve player: %v", err)
    }
    fmt.Printf("Updated UpdatedAt - Team: %v, Player: %v\n", updatedTeam.UpdatedAt, updatedPlayer.UpdatedAt)
}
The Logger is enabled to show all the raw SQL sent to the database. When you run:
go run main.go
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.030ms] [rows:-] SELECT count(*) FROM sqlite_master WHERE type='table' AND name="teams"
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.061ms] [rows:2] SELECT sql FROM sqlite_master WHERE type IN ("table","index") AND tbl_name = "teams" AND sql IS NOT NULL order by type = "table" desc
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.018ms] [rows:-] SELECT * FROM `teams` LIMIT 1
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.017ms] [rows:-] SELECT count(*) FROM sqlite_master WHERE type = "index" AND tbl_name = "teams" AND name = "idx_teams_deleted_at"
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.014ms] [rows:-] SELECT count(*) FROM sqlite_master WHERE type='table' AND name="players"
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.045ms] [rows:2] SELECT sql FROM sqlite_master WHERE type IN ("table","index") AND tbl_name = "players" AND sql IS NOT NULL order by type = "table" desc
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.015ms] [rows:-] SELECT * FROM `players` LIMIT 1
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.027ms] [rows:-] SELECT count(*) FROM sqlite_master WHERE type = "table" AND tbl_name = "players" AND (sql LIKE "%CONSTRAINT ""fk_teams_players"" %" OR sql LIKE "%CONSTRAINT fk_teams_players %" OR sql LIKE "%CONSTRAINT `fk_teams_players`%" OR sql LIKE "%CONSTRAINT [fk_teams_players]%" OR sql LIKE "%CONSTRAINT fk_teams_players %")
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:54
[0.016ms] [rows:-] SELECT count(*) FROM sqlite_master WHERE type = "index" AND tbl_name = "players" AND name = "idx_players_deleted_at"
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:64
[2.639ms] [rows:1] INSERT INTO `teams` (`id`,`name`,`created_at`,`updated_at`,`deleted_at`) VALUES ("3b2fc6a1-54ee-417b-9959-8849d6a9e403","Team A","2025-01-21 12:05:17.013","2025-01-21 12:05:17.013",NULL) RETURNING `id`
2025/01/21 12:05:17 /home/wilton/Workspace/gorm/main.go:75
[2.272ms] [rows:1] INSERT INTO `players` (`id`,`team_id`,`name`,`created_at`,`updated_at`,`deleted_at`) VALUES ("fbb3a88c-d6fa-4181-bfd7-8803ceb9ba69","3b2fc6a1-54ee-417b-9959-8849d6a9e403","Player 1","2025-01-21 12:05:17.016","2025-01-21 12:05:17.016",NULL) RETURNING `id`
Initial UpdatedAt - Team: 2025-01-21 12:05:17.013606731 -0300 -03, Player: 2025-01-21 12:05:17.016384285 -0300 -03
2025/01/21 12:05:19 /home/wilton/Workspace/gorm/main.go:30
[0.352ms] [rows:1] UPDATE `teams` SET `updated_at`="2025-01-21 12:05:19.019" WHERE id = "3b2fc6a1-54ee-417b-9959-8849d6a9e403" AND `teams`.`deleted_at` IS NULL
2025/01/21 12:05:19 /home/wilton/Workspace/gorm/main.go:83
[2.770ms] [rows:1] UPDATE `players` SET `name`="Updated Player 1",`updated_at`="2025-01-21 12:05:19.02" WHERE `players`.`deleted_at` IS NULL AND `id` = "fbb3a88c-d6fa-4181-bfd7-8803ceb9ba69"
2025/01/21 12:05:19 /home/wilton/Workspace/gorm/main.go:88
[0.258ms] [rows:1] SELECT * FROM `teams` WHERE id = "3b2fc6a1-54ee-417b-9959-8849d6a9e403" AND `teams`.`deleted_at` IS NULL ORDER BY `teams`.`id` LIMIT 1
2025/01/21 12:05:19 /home/wilton/Workspace/gorm/main.go:93
[0.159ms] [rows:1] SELECT * FROM `players` WHERE id = "fbb3a88c-d6fa-4181-bfd7-8803ceb9ba69" AND `players`.`deleted_at` IS NULL ORDER BY `players`.`id` LIMIT 1
Updated UpdatedAt - Team: 2025-01-21 12:05:19.019831936 -0300 -0300, Player: 2025-01-21 12:05:19.020259014 -0300 -0300
Note the UPDATE statements after waiting for 2 seconds. The teams update was correctly sent before the players update since you are using the BeforeUpdate hook.
2025/01/21 12:05:19 /home/wilton/Workspace/gorm/main.go:30
[0.352ms] [rows:1] UPDATE `teams` SET `updated_at`="2025-01-21 12:05:19.019" WHERE id = "3b2fc6a1-54ee-417b-9959-8849d6a9e403" AND `teams`.`deleted_at` IS NULL
2025/01/21 12:05:19 /home/wilton/Workspace/gorm/main.go:83
[2.770ms] [rows:1] UPDATE `players` SET `name`="Updated Player 1",`updated_at`="2025-01-21 12:05:19.02" WHERE `players`.`deleted_at` IS NULL AND `id` = "fbb3a88c-d6fa-4181-bfd7-8803ceb9ba69"
Could you test this in your environment and check if you can reproduce the results?
OK, the Google documentation is wrong for my locale.
The function works with a semicolon, not a comma, between the range and the criterion (the argument separator depends on your spreadsheet's locale settings).
COUNTIF(A1:A10,">20") --> does not work
COUNTIF(A1:A10;">20") --> works
@GregoryMagarshak Did you find something?
I got this same error on my Laravel app (an ERP software). I changed 127.0.0.1 in the .env file to localhost, and afterward I ran php artisan optimize:clear, after which the error disappeared.
The difference in values comes from the query being on a rate basis. Therefore, if you aggregate with count, that should give you RPS. Alternatively, you can do a rate aggregation with a 1-second time window (which again is inferred as a count).
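To illustrate the underlying idea with a generic Python sketch (independent of the actual query tool; the timestamps are made up): counting events per 1-second window yields requests per second directly:

```python
from collections import Counter

# Hypothetical request timestamps, in seconds since some epoch
timestamps = [0.1, 0.4, 0.9, 1.2, 1.3, 2.7]

# Bucket each request into its 1-second window and count per window:
# the count in each window is the RPS for that second.
rps = Counter(int(t) for t in timestamps)

print(dict(rps))  # {0: 3, 1: 2, 2: 1} -> 3 RPS, then 2 RPS, then 1 RPS
```

A rate metric averaged over a longer window would instead smear these counts out, which is why the per-window count and the rate value can differ.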
So it seems to come down to the fact that the setting does not do anything when using dates as names? I have it set to 3 and it still creates more than 3 files per day, and it also seems to hold more than 3 days' worth of files.
ClientSizeProperty.Changed.Subscribe(size =>
{
    var x = (size.OldValue.Value.Width - size.NewValue.Value.Width) / 2 * Scaling;
    var y = (size.OldValue.Value.Height - size.NewValue.Value.Height) / 2 * Scaling;
    Position = new PixelPoint(Position.X + (int)x, Position.Y + (int)y);
});

public double Scaling => Screens.ScreenFromWindow(this)!.Scaling;
I need to multiply x and y by the window scaling factor; otherwise, the window would slowly shift. Is this the right way to fix it?
I inserted a dotnet restore step right before the build of the solution. This fixed the error message.
I'm having - as it seems - exactly the same issue. Some details:
Java: 8
Oracle JDBC Driver 8: 19.3.0.0
My Sequence looks like:
Increment By 1
Last Cached Value 100100541
Minimum Value 0
Maximum Value 999999999999999999999999999
Cache Size 20
Cycle No
Order No
Keep No
Scale No
Extend No
Session/Global Global
Sharded No
I'm inserting ~33k records in batches into my Oracle DB. Most of the records work fine (~33'500) and some (~7'200) return a very large negative number (such as -449800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000).
The database contains the right values generated.
My code looks (roughly for simplification) like this:
try (PreparedStatement ps = connection.prepareStatement(INSERT_STMT, GENERATED_COLUMN_INDEXES)) {
    // initialize data per prepared statement
    for (...) {
        ps.setString(...); ...
        ps.addBatch();
    }
    ps.executeBatch();
    try (ResultSet generatedKeys = ps.getGeneratedKeys()) {
        if (generatedKeys.next()) {
            BigDecimal generatedKey = generatedKeys.getBigDecimal(GENERATED_COLUMN_INDEX);
            ...
        }
    }
}
Any thought on how to fix this?
Thanks so much!
In my case, I wanted any key I typed to cancel the suggestion. Does anyone know how to do something like this?
I had this issue when using Flutter 3.22, and after updating Flutter to 3.24 it was fixed. It might have been related to this commit:
Fix incorrect behavior of ScrollViewKeyboardDismissBehavior.onDrag for ScrollViewers with Drawer by @dawidope in 148948
you can find it here: https://docs.flutter.dev/release/release-notes/release-notes-3.24.0
It looks like this is not a feature of FastHTML, but it is a discussed topic. This issue on GitHub offers a solution. But since FastAPI and FastHTML focus on different logic, different tools are included.
I'm using the MQ 9.1.2.0 CD version on RHEL 7.9, and RHEL 7.9 is going to be upgraded to RHEL 9. Kindly let me know if MQ 9.1.2.0 is supported on RHEL 9.
I think I may have the answer. My KQL that triggers the alert includes
AppPageViews
| where ClientType == 'Browser'
| where TimeGenerated between (now(-30d) .. now())
| summarize count_sum = sum(ItemCount)
When this returns a zero I know that the site is no longer needed.
In your .proto file you should write something like this:
option go_package = "relative/path/where/you/want/generated/code/to/be/placed;package-name-for-generated-code";
The ; is required between the path and the package name.
Hope this will solve your problem ;)
I think this is close to what you're looking for: https://github.com/vanderlee/php-sentence
Note that the keys are sorted in a natural ordering. You can use that to stop iterating through the keys once the condition has been met
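As an illustration in Python (the original context's language isn't shown here; the key names are made up): because the keys come back in sorted order, you can stop iterating as soon as a key exceeds your threshold:

```python
data = {"item001": 1, "item002": 2, "item010": 10, "item020": 20}

results = []
for key in sorted(data):   # keys in natural/sorted order
    if key > "item010":    # condition met: every later key is larger too
        break              # so we can stop iterating early
    results.append(key)

print(results)  # ['item001', 'item002', 'item010']
```

The early break is only safe because the ordering guarantees that no later key can satisfy the condition once one has failed it.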
I found the problem in the configuration: the createHtmlPlugin({...}) overrides all pages. For a multi-page app, this block has to be removed.
Yes, the "print" statement will not work when you are integrating a Python script into Automation Anywhere. "return" the final output that you want to fetch.
If you still want to verify that the print statement executed, go to the bot logs under "C:\ProgramData\AutomationAnywhere\BotRunner\Logs\Bot Launcher logs" and search for the expected print statement.
Increase the top_k parameter value.
Experiment with hybrid search (dense + sparse embeddings); it will be beneficial for keyword search results. Note you need to change the similarity metric to "dot-product" for this. Source: https://docs.pinecone.io/guides/data/understanding-hybrid-search
Add metadata in the indexing part, and for inferencing you can filter the results based on the metadata. Source: https://docs.pinecone.io/guides/data/understanding-metadata
SOLVED:
In the end, the issue was within the audio file conversion. The file was probably getting corrupted, so Google received a file with no audio.
I installed ffmpeg in my n8n instance and used it to do the conversion. Everything worked fine after this.
I am encountering an issue with the TypeScript type checker that I do not understand.
I have two files representing snapshots: newest.ts (the latest version) and older.ts (the previous version).
Content of older.ts:
export type A = number;
Content of newest.ts:
export type A = number | string;
My project uses the same TypeScript program and checker to process files as follows:
class Project {
    static async process(entrypoints: string | string[], options: ts.CompilerOptions, host: ts.ModuleResolutionHost): Promise<Project> {
        if (!Array.isArray(entrypoints)) {
            return this.process([entrypoints], options, host);
        }
        const program = ts.createProgram(entrypoints, options);
        // Additional processing...
    }
}
const compilerOptions: ts.CompilerOptions = {
    moduleResolution: ts.ModuleResolutionKind.NodeJs,
    baseUrl: process.cwd(), // Adjust baseUrl to your project structure
    target: ts.ScriptTarget.ESNext,
    module: ts.ModuleKind.CommonJS,
    strictNullChecks: true
};

const resolutionHost: ts.ModuleResolutionHost = {
    fileExists: (fileName: string) => ts.sys.fileExists(fileName),
    readFile: (fileName: string) => ts.sys.readFile(fileName),
    directoryExists: (directoryName: string) => ts.sys.directoryExists(directoryName),
    getCurrentDirectory: () => process.cwd(),
    useCaseSensitiveFileNames: () => true
};
const pathNewest = Path.join(__dirname, "./resources/newest.ts"); // The newest file
const pathOlder = Path.join(__dirname, "./resources/older.ts"); // The older file
const project = await Project.process([pathNewest, pathOlder], compilerOptions, resolutionHost);
const newestFile = project.files.find(file => file.filename === pathNewest); // Extract the newest file
const olderFile = project.files.find(file => file.filename === pathOlder); // Extract the older file
const checker = project.program.getTypeChecker(); // Get the checker for processing
compare(checker, olderFile, newestFile); // Compare the two files
I implemented a comparison function inspired by your approach:
const compare = (checker: ts.TypeChecker, older: File | Exportable, newest: File | Exportable): UpdateType => {
if (older.isFile() && newest.isFile()) {
// File processing logic...
}
if (older.isType() && newest.isType()) {
// `older.source` & `newest.source` are `ts.TypeAliasDeclaration` extracted from the same program.
const olderType = checker.getApparentType(checker.getTypeAtLocation(older.source));
const newestType = checker.getApparentType(checker.getTypeAtLocation(newest.source));
// I also tried without using the apparent type:
// const olderType = checker.getTypeAtLocation(older.source);
// const newestType = checker.getTypeAtLocation(newest.source);
console.log(older.source.symbol.parent.escapedName); // OUTPUT: "<PROJECT_PATH>/resources/older"
console.log(newest.source.symbol.parent.escapedName); // OUTPUT: "<PROJECT_PATH>/resources/newest"
// Both types come from their respective newest & older file.
console.log(`${checker.typeToString(olderType)} --> ${checker.typeToString(newestType)} = ${checker.isTypeAssignableTo(olderType, newestType)}`);
// OUTPUT: A --> A = true
console.log(`${checker.typeToString(newestType)} --> ${checker.typeToString(olderType)} = ${checker.isTypeAssignableTo(newestType, olderType)}`);
// OUTPUT: A --> A = true (Unexpected: `number | string` should not be assignable to `number`)
if (checker.isTypeAssignableTo(olderType, newestType) && checker.isTypeAssignableTo(newestType, olderType)) {
return UpdateType.SAME; // Unexpected result
}
if (checker.isTypeAssignableTo(olderType, newestType)) {
return UpdateType.MINOR;
}
return UpdateType.MAJOR;
}
return UpdateType.MAJOR;
};
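For reference, here is the classification I expect compare to produce, sketched in Python with sets standing in for types (assignability modeled as the subset relation; the names and the "SAME"/"MINOR"/"MAJOR" strings are just illustrative stand-ins for the UpdateType enum). Under this model, widening number to number | string should come out as MINOR, not SAME:

```python
# Types stood in for by sets of allowed value kinds:
# "older is assignable to newest" ~ older <= newest (subset).
def update_type(older, newest):
    older_to_newest = older <= newest
    newest_to_older = newest <= older
    if older_to_newest and newest_to_older:
        return "SAME"
    if older_to_newest:
        return "MINOR"  # the type was widened, old values still fit
    return "MAJOR"      # the change breaks old consumers

number = {"number"}
number_or_string = {"number", "string"}
print(update_type(number, number_or_string))  # MINOR
print(update_type(number_or_string, number))  # MAJOR
```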
The problem I am facing is that the type checker incorrectly reports:
A --> A = true
A --> A = true
Even though the type has changed from number to number | string, the checker still considers both assignable in both directions. I expected the check from newestType to olderType to return false, as string is not assignable to number.
Do you have any insights into why the checker might behave this way? Could there be a scope issue? Am I missing something in the setup that could lead to this behavior? Like cache or something?
Thanks for your help!
I had a similar issue in PS 5.1 after testing that it worked using the same credentials using PS core from a different device, though the error message was slightly different: "Restart-Computer : The computer SVRNAME is skipped. Fail to retrieve its LastBootUpTime via the WMI service with the following error message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))."
The answer from @luke worked for me (specify the WSMAN protocol)
Why not maintain a list of unique values and base your drop-down on that list, instead of linking the drop-down to the entire dataset? You just need to keep the reference list updated after each data refresh.
Option A: After each update, just take the column with company names, drop it into a new sheet and use data->remove duplicates.
Option B: Use a dynamic array to always keep a list of unique values, for example
=UNIQUE(data!U:U)
First, try running npm install convex@latest @clerk/clerk-react@latest swr@latest. If that doesn't work, add "use client" at the top of the RootLayout file.
Try this: document.getElementById("bowlerList").appendChild(tr); (getElementById returns a single element, not a collection, so there is no [0] to index).
Check your file for comments placed on the same line as code. In my case I faced the same issue with LABEL="my_label" #comment; I moved the comment to a separate line and it works fine now.
const view = document.getElementById('view');
const svgCanvas = document.getElementById('svgCanvas');
const drawPath = document.getElementById('drawPath');
const cSize = document.getElementById('cSize');
const downloadBtn = document.getElementById('downloadBtn');
let isDrawing = false;
let pathData = '';
const createSVGContent = (svwidth, svheight, pws, csr) => `
<svg xmlns="http://www.w3.org/2000/svg" width="${svwidth}" height="${svheight}">
<path id="drawPath" stroke="black" stroke-width="${pws}" fill="none" d="${pathData}" />
<circle cx="0" cy="0" r="${csr}" fill="coral">
<animateMotion dur="10s" repeatCount="indefinite">
<mpath href="#drawPath" />
</animateMotion>
</circle>
</svg>`;
svgCanvas.addEventListener('touchstart', (event) => {
isDrawing = true;
const touch = event.touches[0];
pathData += `M ${touch.clientX} ${touch.clientY}`;
drawPath.setAttribute('d', pathData);
updateView();
event.preventDefault();
});
svgCanvas.addEventListener('touchmove', (event) => {
if (!isDrawing) return;
const touch = event.touches[0];
pathData += ` L ${touch.clientX} ${touch.clientY}`;
drawPath.setAttribute('d', pathData);
updateView();
event.preventDefault();
});
svgCanvas.addEventListener('touchend', () => { isDrawing = false; });
svgCanvas.addEventListener('touchcancel', () => { isDrawing = false; });
const widthSlider = document.getElementById('sVw');
const heightSlider = document.getElementById('sVh');
const pWs = document.getElementById('pWs');
const cSr = document.getElementById('cSr');
widthSlider.oninput = () => { svgCanvas.setAttribute('width', widthSlider.value); };
heightSlider.oninput = () => { svgCanvas.setAttribute('height', heightSlider.value); };
pWs.oninput = () => { drawPath.setAttribute('stroke-width', pWs.value); };
cSr.oninput = () => { cSize.setAttribute('r', cSr.value); };
downloadBtn.addEventListener('click', () => {
const svgWidth = widthSlider.value;
const svgHeight = heightSlider.value;
const pws = pWs.value;
const csr = cSr.value;
const svgContent = createSVGContent(svgWidth, svgHeight, pws, csr);
const blob = new Blob([svgContent], { type: 'image/svg+xml' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'image.svg';
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
});
function updateView() {
const svWidth = widthSlider.value;
const svHeight = heightSlider.value;
const pws1 = pWs.value;
const csr1 = cSr.value;
view.innerHTML = createSVGContent(svWidth, svHeight, pws1, csr1);
}
I ran into this issue because my local.settings.json file was faulty. I was missing a comma at the end of one of my properties inside the Values object. Bit of a misleading error message in my case.
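If you suspect a similar problem, running the file through a JSON parser points straight at the offending position. A quick Python sketch (the settings content below is a made-up example with the same kind of missing comma):

```python
import json

# Made-up local.settings.json fragment: a comma is missing between the
# two properties inside "Values".
broken = '''{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}'''

try:
    json.loads(broken)
    error_position = None
except json.JSONDecodeError as exc:
    # exc.lineno / exc.colno point at where parsing failed
    error_position = (exc.lineno, exc.colno)

print(error_position)
```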
SELECT name FROM world WHERE gdp > ALL(SELECT gdp FROM world WHERE gdp > 0 AND continent = 'Europe')
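The semantics of > ALL (...) are "greater than every value the subquery returns", which (with no NULLs in the subquery) is the same as being greater than its maximum. A small Python illustration with made-up rows standing in for the world table:

```python
# Hypothetical (name, continent, gdp) rows standing in for `world`.
world = [
    ("Germany", "Europe", 4_000_000),
    ("France", "Europe", 3_000_000),
    ("China", "Asia", 18_000_000),
    ("Tuvalu", "Oceania", 60),
]

# Subquery: SELECT gdp FROM world WHERE gdp > 0 AND continent = 'Europe'
europe_gdps = [gdp for _, cont, gdp in world if cont == "Europe" and gdp > 0]

# Outer query: gdp > ALL (...) keeps rows beating every European gdp.
richer_than_all_europe = [name for name, _, gdp in world
                          if all(gdp > g for g in europe_gdps)]
print(richer_than_all_europe)  # ['China'] -- Germany ties itself, so it is excluded
```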
Any idea how to replace the business.manage / businesscommunications usage? Google is not really clear about this in their docs; they just say it's deprecated but don't provide any alternative...
Ok, shortly after posting I found the answer myself by just trying out a few things. To ensure that the third-party modules are converted properly, one simply has to include them in the vite config like so:
{
optimizeDeps: {
include: ["prop-types", "sanitize-html"]
},
}
(I had originally tried optimizeDeps, but only by including our own module, which did not help.)
Replicate the way Material solved this issue using CSS variables rather than SCSS variables.
Using fastavro in 2025 (https://fastavro.readthedocs.io/en/latest/writer.html)
from io import BytesIO
from fastavro import writer, parse_schema

schema_json = {
"type": "record",
"name": "User",
"namespace": "com.test",
"fields": [
{"name": "person_name", "type": "string"},
{"name": "age", "type": "int"},
{"name": "email", "type": "string"}
]
}
data_user_1 = {"person_name": "John Doe", "email": "[email protected]", "age": 30}
data_user_2 = {"person_name": "Jane Doe", "email": "[email protected]", "age": 25}
parsed_schema = parse_schema(schema_json)
bytes_io = BytesIO()
writer(bytes_io, parsed_schema, [data_user_1, data_user_2])
The repo you're referencing doesn't have any tags, but for a repo that does have tags, like Apache Airflow, they would appear in the view.
My issue was I didn't have a function that was explicitly checking for the missing " to ". I was checking for that in a different function. Once I created another function that explicitly checked for something like "9 AM 10 PM", it passed
The previous answer has already suggested the appropriate solutions for Power Fx. However, I would filter the underlying data outside the app, according to the maximum values; in my opinion, this is not part of the app logic. If you have the option, filter the data in the data source or in your data-processing tool, where you usually have more capable tools for data processing. You can then bind the filtered list to your gallery, add an input field, and use the Search() or Filter() function in the Items property of the gallery.
This way you don't have any logic in your app that has to filter large amounts of data. This is cleaner and performs better.
Yours sincerely Felix Hois
We had a monitor on one of the tags. When a monitor is set, the tag is always retained for 7 days.
Deleting the monitor made it work correctly again.
Here is another way. I created Python code to generate a PDF file with syntax-highlighted source code from a given folder.
python3 print_code.py --source_folder ~/Documents/model_code --output_folder ~/Documents --file_extension 'spthy' --syntax_language 'cpp'
Check the requirements to run the code at https://github.com/mohitWrangler/source-code-to-pdf
Moving the spine of ax4 more outwards should result in the desired effect:
ax4.spines["right"].set_position(("axes", 1.2))
This works fine on Android but not on iOS.
It seems like the Jackson parser does not recognize 0D 0A (\r\n) as an end of line. You can try replacing \r\n with \n, something like String normalizedContent = fileContent.replace("\r\n", "\n");
Alternatively, you can try enabling CRLF end-of-line handling: csvMapper.enable(CsvParser.Feature.ALLOW_CRLF_FOR_NEW_LINE);
If neither works, my second guess is that the bug is caused by localization.
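The normalization idea from the first suggestion, sketched in Python for brevity (the CSV content is made up): replace every CRLF with a plain LF before handing the text to the parser.

```python
# Made-up CSV content that uses Windows line endings (\r\n).
raw = "name,age\r\nalice,30\r\nbob,25\r\n"

# Collapse CRLF to LF before parsing, the same step as
# fileContent.replace("\r\n", "\n") in the Java answer.
normalized = raw.replace("\r\n", "\n")

rows = [line.split(",") for line in normalized.strip().split("\n")]
print(rows)
```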
I got this same error in my Laravel app (an ERP). I changed 127.0.0.1 to localhost in the .env file, then ran php artisan optimize:clear, and the error disappeared.
The tls.pem file was created wrong; it needs to be PRIVATE-KEY + CERT-CHAIN.
Hi, I found this way to insert any string or char at any position:
function strInsert(str, insert, idxAt) {
return str.split("").reduce((acc, ch, n) => {
return n === idxAt ? acc + insert + ch : acc + ch;
}, "");
}
console.log(strInsert("9998888", "-", 3)); // 999-8888
The State Processor API will let you process the data in a checkpoint or savepoint.
The issue lies in how the path is being constructed. The Path.Combine method already handles path separators, so you don't need to add an extra leading backslash (\).
var dirLoc = Path.GetDirectoryName(Assembly.GetEntryAssembly()!.Location);
var path = Path.Combine(dirLoc!, "files", "file.csv");
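The same rule applies in other path APIs: a component that starts with a separator is treated as rooted and discards everything before it. A quick Python illustration (using posixpath so the separators are deterministic regardless of OS):

```python
import posixpath

# Joining relative components works as expected.
good = posixpath.join("/app/bin", "files", "file.csv")

# A leading separator makes the component rooted and drops what came
# before it -- the analogue of the extra leading backslash in Path.Combine.
bad = posixpath.join("/app/bin", "/files", "file.csv")

print(good)  # /app/bin/files/file.csv
print(bad)   # /files/file.csv
```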
You can wrap the icon prop inside an object:
import { HomeIcon } from '@heroicons/react/24/outline';
function TargetComponent({ icon }){
const iconObj = { icon };
return <iconObj.icon />
}
function ParentComponent () {
return <TargetComponent icon={ HomeIcon } />
}
if (double.TryParse(txtSize.Text, out Diameter)) ; causes a subtle problem: the trailing semicolon is an empty statement, so the if has no real body and the parse result is never acted on. Was this intended to be a one-liner if statement? No ; is needed if so! Simply add braces for readability unless it's a single-line if statement. With that said, here is a great starting point that considers what @Jamiec mentioned:
namespace WindowsFormsApp3
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private List<Double> ClassAPlus = new List<Double> { 0.080, 0.130, 0.150, 0.200, 0.250, 0.250 };
private List<Double> ClassAMinus = new List<Double> { 0.030, 0.050, 0.080, 0.100, 0.100, 0.130 };
private List<Double> ClassBPlus = new List<Double> { 0.080, 0.250, 0.300, 0.400, 0.500, 0.500 };
private List<Double> ClassBMinus = new List<Double> { 0.030, 0.100, 0.100, 0.150, 0.150, 0.250 };
private void btnEnter_Click(object sender, EventArgs e)
{
double Diameter, Plus, Minus, Nominal, Tolerance;
bool parseSuccess = double.TryParse(txtSize.Text, out Diameter);
if (parseSuccess == false)
{
return;
}
if (rbA.Checked == true && rbMetric.Checked == true)
{
if (Diameter <= 1.57)
{
Plus = (Diameter + ClassAPlus[0]);
Minus = (Diameter - ClassAMinus[0]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassAPlus[0].ToString("0.000") + " /- " + ClassAMinus[0].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 1.57 && Diameter <= 6.35)
{
Plus = (Diameter + ClassAPlus[1]);
Minus = (Diameter - ClassAMinus[1]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassAPlus[1].ToString("0.000") + " /- " + ClassAMinus[1].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 6.35 && Diameter <= 12.7)
{
Plus = (Diameter + ClassAPlus[2]);
Minus = (Diameter - ClassAMinus[2]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassAPlus[2].ToString("0.000") + " /- " + ClassAMinus[2].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 12.7 && Diameter <= 19.05)
{
Plus = (Diameter + ClassAPlus[3]);
Minus = (Diameter - ClassAMinus[3]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassAPlus[3].ToString("0.000") + " /- " + ClassAMinus[3].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 19.05 && Diameter <= 25.4)
{
Plus = (Diameter + ClassAPlus[4]);
Minus = (Diameter - ClassAMinus[4]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassAPlus[4].ToString("0.000") + " /- " + ClassAMinus[4].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 25.4)
{
Plus = (Diameter + ClassAPlus[5]);
Minus = (Diameter - ClassAMinus[5]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassAPlus[5].ToString("0.000") + " /- " + ClassAMinus[5].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
}
else if (rbB.Checked == true && rbMetric.Checked == true)
{
if (Diameter <= 1.57)
{
Plus = (Diameter + ClassBPlus[0]);
Minus = (Diameter - ClassBMinus[0]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassBPlus[0].ToString("0.000") + " /- " + ClassBMinus[0].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 1.57 && Diameter <= 6.35)
{
Plus = (Diameter + ClassBPlus[1]);
Minus = (Diameter - ClassBMinus[1]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassBPlus[1].ToString("0.000") + " /- " + ClassBMinus[1].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 6.35 && Diameter <= 12.7)
{
Plus = (Diameter + ClassBPlus[2]);
Minus = (Diameter - ClassBMinus[2]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassBPlus[2].ToString("0.000") + " /- " + ClassBMinus[2].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 12.7 && Diameter <= 19.05)
{
Plus = (Diameter + ClassBPlus[3]);
Minus = (Diameter - ClassBMinus[3]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassBPlus[3].ToString("0.000") + " /- " + ClassBMinus[3].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 19.05 && Diameter <= 25.4)
{
Plus = (Diameter + ClassBPlus[4]);
Minus = (Diameter - ClassBMinus[4]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassBPlus[4].ToString("0.000") + " /- " + ClassBMinus[4].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
else if (Diameter > 25.4)
{
Plus = (Diameter + ClassBPlus[5]);
Minus = (Diameter - ClassBMinus[5]);
Nominal = (Plus + Minus) / 2;
Tolerance = Plus - Nominal;
lblResult.Text = Nominal.ToString("0.000") + " ±" + Tolerance.ToString("0.000");
lblBilat.Text = Diameter.ToString("0.000") + " +" + ClassBPlus[5].ToString("0.000") + " /- " + ClassBMinus[5].ToString("0.000");
lblRange.Text = (Nominal - Tolerance).ToString("0.000") + " - " + (Nominal + Tolerance).ToString("0.000");
}
}
}
}
}
I came back the next day and the EasyNetQ_Default_Error_Queue is back and with errored messages.
My guess is something restarted and re-created that queue. I had restarted my local development a few times, but didn't see that happen.
I have found the solution to the above-mentioned issue. Modify the Terraform state file as described below.
Step 1 - Replace the old Terraform resource type with the new one. In my case I replaced "azurerm_template_deployment" with "azurerm_resource_group_template_deployment".
Step 2 - Replace "schema_version": 1 with "schema_version": 0.
NOTE - Changing schema_version to 0 alone may already resolve the issue.
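Those two replacements can be scripted instead of hand-editing. A minimal sketch, assuming a heavily simplified state layout (the resource names here are hypothetical, and you should always back up the state file before touching it):

```python
import json

# Stripped-down stand-in for a terraform.tfstate file.
state = {
    "resources": [
        {
            "type": "azurerm_template_deployment",
            "name": "example",
            "instances": [{"schema_version": 1, "attributes": {}}],
        }
    ]
}

# Step 1: swap the old resource type for the new one.
# Step 2: reset schema_version on each instance.
for resource in state["resources"]:
    if resource["type"] == "azurerm_template_deployment":
        resource["type"] = "azurerm_resource_group_template_deployment"
        for instance in resource.get("instances", []):
            instance["schema_version"] = 0

updated = json.dumps(state, indent=2)
print(updated)
```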
I believe it has to do with the kind of session Jenkins uses to connect to the Windows agent.
I encountered a similar scenario some time ago. I cannot explain the technical implications underlying these choices, but I can tell you what worked for me.
Configuring Jenkins as a Windows service did not work for me, but running the .jar as part of a startup script did.
I could not see what Jenkins was doing on the VM when I connected via RDP; I had to use TightVNC to see it interacting with the browser.
This behaviour is NVIDIA specific due to the limitations on memory transfers in CUDA implementation. CUDA docs, PyOpenCL issue. Internally, OpenCL calls to CUDA API, thus the limitations propagate to OpenCL.
Firstly, host-allocated memory can be of two types: pinned (non-paged) and non-pinned (paged); in CUDA, only transfers of pinned memory (which stays in RAM and is never offloaded to disk) can be performed non-blocking, i.e. asynchronously.
Secondly, if the memory is paged, CUDA first copies it to a pinned buffer and then transfers to device memory and the whole operation is blocking, see StackOverflow Answer. Supposedly, this explains such a long copy time of the first transfer.
In order to use asynchronous memory transfers and kernel execution, only pinned memory must be used. To get pinned memory, it has to be allocated by OpenCL itself.
Arrays created by Numpy are usually created with paged memory and Numpy has no functionality to explicitly use pinned memory.
To create an array with pinned memory, numpy arrays should be created using a buffer allocated by OpenCL.
The first step is to create a Buffer:
buffer = cl.Buffer(ctx, cl.mem_flags.READ_WRITE|cl.mem_flags.ALLOC_HOST_PTR, size=a.size * a.itemsize)
This allocates memory on both host and device. The ALLOC_HOST_PTR flag forces OpenCL to allocate pinned memory on the host. Unlike with the COPY_HOST_PTR flag, this memory is created empty and is not tied to an existing Numpy array.
Then, the buffer has to be mapped to a Numpy array:
mapped, event = cl.enqueue_map_buffer(queue, buffer, cl.map_flags.WRITE, 0, shape=a.shape, dtype=a.dtype)
mapped is a Numpy array that can then be used conventionally in Python.
Finally, the mapped array can be filled with data from target array:
mapped[...] = a
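The map-then-fill pattern boils down to viewing an existing allocation as a NumPy array and writing into the view. Here is a plain-NumPy analogue of that relationship (no OpenCL involved; the bytearray simply stands in for the driver-allocated pinned region):

```python
import numpy as np

# Stand-in for a driver-allocated pinned region: a preallocated byte buffer.
backing = bytearray(4 * 8)  # room for four float64 values

# "Map" it: create a NumPy view over the existing allocation (no copy).
mapped = np.frombuffer(backing, dtype=np.float64)

# Fill the view, as with `mapped[...] = a` above.
mapped[...] = [1.0, 2.0, 3.0, 4.0]

# The bytes landed in the original allocation.
roundtrip = np.frombuffer(bytes(backing), dtype=np.float64)
print(roundtrip)
```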
Now, running the same benchmark shows non-blocking behaviour:
import numpy as np
import pyopencl as cl
from timeit import default_timer as dt
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
a = np.random.random((1000, 1000, 500)).astype(np.float64)
mf = cl.mem_flags
start = dt()
size = a.size * a.itemsize
a_buff = cl.Buffer(ctx, mf.READ_WRITE | mf.ALLOC_HOST_PTR, size=size)
a_mapped, event = cl.enqueue_map_buffer(queue, a_buff, cl.map_flags.WRITE, 0, shape=a.shape, dtype=a.dtype)
a_mapped[:] = a
cl.enqueue_copy(queue, a_buff, a_mapped, is_blocking=False)
print(f'Buffer creation time: {dt()-start:0.4f} s')
start = dt()
event1 = cl.enqueue_copy(queue, a_buff, a_mapped, is_blocking=True)
print(f'Copy time blocking 1: {dt()-start:0.4f} s')
start = dt()
event2 = cl.enqueue_copy(queue, a_buff, a_mapped, is_blocking=False)
print(f'Copy time non-blocking (Host to Device): {dt()-start:0.4f} s')
start = dt()
event3 = cl.enqueue_copy(queue, a_mapped, a_buff, is_blocking=False)
print(f'Copy time non-blocking (Device to Host): {dt()-start:0.4f} s')
Result:
Buffer creation time: 1.8355 s
Copy time blocking 1: 0.3096 s
Copy time non-blocking (Host to Device): 0.0001 s
Copy time non-blocking (Device to Host): 0.0000 s
PS: as you can see, having non-blocking functionality changes the underlying memory allocation. It would require refactoring of all array creation routines, which means it cannot be implemented 'on top' without significantly changing source code.
For those still facing the issue, I have managed to resolve it by disabling the K2 Mode in the Kotlin Settings
https://stackoverflow.com/a/66173517/23253438
This worked for me to get everything up and running in my local environment. However I am still facing issues when trying to run a bitbucket pipeline.
Where did you set this parameter
<argLine>${argLine} -XX:PermSize=256m -XX:MaxPermSize=1048m</argLine>
The JoinedSubclassEntityPersister metadata is missing for the latest Hibernate releases: https://github.com/search?q=repo%3Aoracle%2Fgraalvm-reachability-metadata+JoinedSubclassEntityPersister&type=code
Copying it from here and adding to the project as additional metadata solves the problem
I also opened https://github.com/oracle/graalvm-reachability-metadata/issues/589
Without the size it also works, but only in KDE and older GNOME (22.04), not in current GNOME (24.04).
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.scene.text.Text;
import javafx.stage.Stage;
public class Muster extends Application {
private static Stage primary = null;
public static void main(String[] args) {
launch(args);
}
@Override
public void init() throws Exception {
}
@Override
public void start(Stage primaryStage) {
primary = primaryStage;
show(true, "Dialog 1", false);
}
private static void show(boolean first, String title, boolean wait) {
VBox vBox = new VBox(10);
vBox.setPadding(new Insets(25));
vBox.setAlignment(Pos.CENTER);
Button btnWait = new Button("Dialog wait");
btnWait.setOnAction(a -> show(false, "Wieder einer", true));
Button btn = new Button("Dialog");
btn.setOnAction(a -> show(false, "Wieder einer", false));
vBox.getChildren().add(new Label("Das ist ein Dialog"));
vBox.getChildren().addAll(btnWait, btn);
for (int i = 0; i < 10; ++i) {
HBox hBox = new HBox(10);
for (int ii = 0; ii < 10; ++ii) {
Text text = new Text("Nummer: " + ii);
Font font = Font.font("Verdana", FontWeight.BOLD, 16);
text.setFont(font);
hBox.getChildren().add(text);
}
vBox.getChildren().add(hBox);
}
try {
Stage stage = first ? primary : new Stage();
Scene scene = new Scene(vBox);
stage.setScene(scene);
stage.setTitle(title);
if (!first) {
stage.initOwner(primary);
}
stage.requestFocus();
stage.toFront();
if (wait) {
stage.showAndWait();
} else {
stage.show();
}
} catch (final Exception exc) {
System.out.println(exc.getMessage());
}
}
}
The solution seems to have been to just turn the computer off and on again several times then write the entire program out again by hand.
I still don't know why the error happened, but I'm suspecting that there may have been either something running in the background that was messing things up or some hidden symbol that I had accidentally copied from one of the references I was using that did it.
Either way, I guess this is solved? Thanks to everyone who responded.
After declaring the hub layer, wrap it like this:
hub_layer = tf.keras.layers.Lambda(hub_layer)