I tried using the new SSH for Infrastructure, and VS Code couldn't finish the connection; it was erroring with "ssh child died" and other errors.
too bad
I had a different issue: we use Cloudflare SSH for infrastructure, like a tunnel, and it seems this SSH tunneling was affecting VS Code, so I switched back to a direct SSH connection and it worked.
This is the best article I found: https://www.confluent.io/blog/error-handling-patterns-in-kafka/ It explains the DLQ pattern.
When I set <BlazorCacheBootResources>false</BlazorCacheBootResources>, the site doesn't work on both desktop and mobile platforms.
In Select Case node.tagName, node.tagName should be elem.tagName.
This suddenly happened to my Windows 11 Entra ID hybrid machine in Feb 2025. I don't know why; I found no AppLocker, SRP, or WDAC policies, though it was acting like one was applied. I also could not run batch files.
To fix it, I had to create a default policy in Local Security Policy under Software Restriction Policies. I just left the Security Levels at the default "Unrestricted", and this fixed things after a reboot.
In my case I had to use the AWS CLI to configure credentials. This guide will help:
https://medium.com/@damipetiwo/seting-up-s3-bucket-in-aws-using-terraform-5444e9b85ce6
Given the exception stack trace, it looks like the graphics subsystem is not properly initialized. When deserializing and instantiating the report, some of its properties try to initialize the graphics engine in order to determine basic metrics like the machine's DPI.
Since you run your report on AWS, by default Telerik Reporting will use the Skia graphics engine, which requires installing the Telerik.Drawing.Skia NuGet package. Additionally, two other libraries must be installed:
Check the help article for more info: https://docs.telerik.com/reporting/getting-started/installation/dot-net-core-support#deploying-on-linux
Macrobenchmark runs on a variant that is based on the release variant. If your app is having those issues when running the benchmark, chances are it's also facing similar problems in your regular release build. That has been the case every time I've encountered this, so I'd begin by checking whether your app behaves normally on this device with the release configuration.
.ply files are not supported in Unity without plugins. You can convert to .obj, .fbx, etc. by downloading Blender, importing the .ply file, and exporting as whatever you want. The issue is that .ply files use vertex colouring, while .obj files need to use a texture file such as a PNG.
If you convert to .obj using MeshLab, the colours will only show up in MeshLab (Unity will not import vertex colours from .obj files, since that is not part of the .obj spec; MeshLab just adds it to the .obj anyway).
What you should do is export the .ply as an .fbx file using Blender, and then import that into Unity. To get the colours to show up, select your .fbx file, go to the inspector, and click "Extract Materials", then edit the material that pops out (it should be called material_001.mat or something). Edit it by changing the shader from "Standard" to a custom unlit vertex-colour shader. Then you should get colours to show up in Unity.
I had the same error message. I was able to fix the issue with the following command:
serverless deploy --aws-profile PROFILE_NAME
For me, specifying the profile was the solution.
When it comes to high-performance rendering in Windows/WPF, Direct3D is the answer, but one needs to incorporate compute shaders to achieve the most performant rendering.
System.Windows.Interop.D3DImage offers a direct interface to Direct3D (one has to render to a texture target). Note that WinForms is faster and offers a more direct interface via the device context / direct render target, so using WindowsFormsHost to embed a WinForms control is technically the fastest way to draw in WPF. I'd recommend keeping your solution interface independent, but one could stick to D3DImage. This approach requires a mixture of C++ and C#.
Pass your data to the GPU side when creating the buffer/shader resource view (about 50 lines), set up your unordered access view (about 50 lines), and write your HLSL shader code (about 50 lines) to solve your problem with a parallel methodology: your data buffer size to vertex buffer size should be a fixed proportion, so break up the work in chunks of 500 points, for example; ultimately the shader produces your vertex buffer. You will also need to understand constant buffers and a simple pixel shader, and the inspiration hits when calling Dispatch. There is example code available on creating your device and immediate context. All in all, it's no more than 750 lines of code. This is the fastest way to draw in WPF if you consider all possible solutions. Given that many current and future computers will have integrated GPUs, APUs/NPUs, or real discrete GPUs, it's past time to start learning compute shaders, Direct3D, and Vulkan. I've written a most-performant way to draw in WPF and WinForms; it can be experienced with a simple click at Gigasoft.com (100 million points fully re-passed and re-rendered).
I had the same issue in PowerShell.
Full disclosure: I didn't have any luck finding the "proper" way to do this in PowerShell, so I had to hack something out. This is what I have so far. I wouldn't consider this the "proper" way; it's just a way that is actually working for me. I borrowed snippets from various examples to kludge this together.
# CookieManager isn't available until Initialization is completed. So I get it in the Initialization Completed event.
$coreWebView2Initialized = {
    $script:cookieManager = $web.CoreWebView2.CookieManager;
    $script:cookies = $script:cookieManager.GetCookiesAsync("");
    $script:coreweb2pid = $web.CoreWebView2.BrowserProcessId; # Used later to find out if the webview2 process was closed so I could delete the cache.
}
$web.add_CoreWebView2InitializationCompleted($coreWebView2Initialized);
# Once the initial navigation is completed, I hook up the GetCookiesAsync call.
$web_NavigationCompleted = {
    $script:cookies = $script:cookieManager.GetCookiesAsync("");
}
$web.add_NavigationCompleted($web_NavigationCompleted)
# In my particular situation, I wanted to handle MFA/JWT authentication with a 3rd-party vendor talking to our MFA provider. The vendor uses JavaScript to change pages, which doesn't trigger a webview2 event. I added a JavaScript observer that watched documentElement.innerText for the "You can close" text the 3rd-party provider would return, indicating it's OK to close the browser. Once this text came through, I used webview.postMessage('Close!') to send a message back to my script so it could close the form and clean everything up.
# The specific part that addresses getting async cookies is adding the GetCookiesAsync hookup once the initial page is loaded. In my case, the cookies I wanted were HTTP-only cookies, so I had to do it this way to get at them.
$web_NavigationCompleted = {
    $script:cookies = $script:cookieManager.GetCookiesAsync("");
    $web.CoreWebView2.ExecuteScriptAsync("
        //Setup an observer to watch for time to close the window
        function observerCallback(mutations) {
            if ( (document.documentElement.textContent || document.documentElement.innerText).indexOf('You can close') > -1 ) {
                //send a Close! message back to webview2 so it can close the window and complete.
                window.chrome.webview.postMessage('Close!');
            }
        }
        const observer = new MutationObserver(observerCallback);
        const targetNode = document.documentElement || document.body;
        const observerconf = { attributes: true, childList: true, subtree: true, characterData: true };
        observer.observe(targetNode, observerconf);
    ");
}
$web.add_NavigationCompleted($web_NavigationCompleted)
# Once the "Close!" message is generated, the cookie I want should be there. This ignores any of the misc innerText events that happen and just waits for the "Close!".
# I grab the specific HTTP-only cookie and return the value.
$web.add_WebMessageReceived({
    param($WebView2, $message)
    if ($message.TryGetWebMessageAsString() -eq 'Close!') {
        $result = ($cookies.Result | Where-Object {$_.name -eq "The_Name_of_the_HTTP_ONLY_Cookie_I_Wanted"}).value
        $web.Dispose()
        # Cleanup cache dir if desired - wait for the webview2 process to close after the Dispose(), then you can delete the cache dir.
        if ($Purge_cache) {
            if ($debug) {write-host "form closing webview2 pid "$script:coreweb2pid -ForegroundColor blue}
            $timeout = 0
            try
            {
                while ($null -ne [System.Diagnostics.Process]::GetProcessById($script:coreweb2pid) -and $timeout -le 2000)
                {
                    if ($debug) {write-host "Waiting for close pid "$script:coreweb2pid -ForegroundColor blue}
                    Start-Sleep -seconds 1
                    $timeout += 10;
                }
            }
            catch { }
            if ($debug) {write-host "cleaning up old temp folder" -ForegroundColor blue}
            $OriginalPref = $ProgressPreference
            $ProgressPreference = "SilentlyContinue"
            $null = remove-item "$([IO.Path]::Combine( [String[]]([IO.Path]::GetTempPath(), 'MyTempWebview2CacheDir')) )" -Recurse -Force
            $ProgressPreference = $OriginalPref
        }
        $form.Close()
        return $result.tostring()
    }
})
There's probably a cleaner way to do this; for now, it works. It drove me crazy, because if you dig for anything about doing MFA authentication with PowerShell you end up with O365 examples, as if the only thing we'd use PowerShell for is O365. If/when I get my authentication module polished enough to post, I'll add it to GitHub and update this post. I spent a lot of time running around in circles trying to do this in PowerShell; hopefully this removes that barrier for folks.
Note: I've also tried setting incognito mode for the WebView2 browser (many ways). That doesn't work, not so far anyway. I don't like any of this authentication data being written to disk in the cache or any other way; I want it to be momentary authorization for the use of the script and then gone. I'm continuing to work on making this not cache things, but for now at least I have a path to delete the browser environment cache.
Cheers.
Thank you for the feedback! I did realize that the size is what was throwing things off. It looks like GAM changed up their transcoding outputs, and the formerly working size no longer spits out.
In case you create a Browser instance, you can use:
if "--headed" in str(sys.argv):
    Browser.browser_parameters["headless"] = False
Where are you trying to pull the code from? If you are trying to pull the code for the first time, then you need to clone it first. You can refer to this link to check out multiple repos from different code repository managers: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
I can also see you are using -script without any task; it is good practice to use a task for such implementations, for consistency.
Is there any way to check whether the device currently has airplane mode turned on through ADB?
You can try executing these new commands (no root permissions required):
If you want to know if the airplane mode is enabled or disabled:
adb shell cmd connectivity airplane-mode
If you want to enable the airplane mode:
adb shell cmd connectivity airplane-mode enable
If you want to disable the airplane mode:
adb shell cmd connectivity airplane-mode disable
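If you want to drive these commands from a script, here is a minimal Python sketch. It assumes adb is on your PATH, and the helper names are my own; the parsing relies on the command printing "enabled" or "disabled", as shown by the commands above.

```python
import subprocess

def parse_airplane_mode(output: str) -> bool:
    # `adb shell cmd connectivity airplane-mode` prints "enabled" or "disabled".
    return output.strip().lower() == "enabled"

def airplane_mode_on() -> bool:
    # Run the query command and parse its output.
    out = subprocess.run(
        ["adb", "shell", "cmd", "connectivity", "airplane-mode"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_airplane_mode(out)
```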
An issue was created on the linked repo.
This was suggested: browser_args.append("--headless=new")
I had the same issue. Thank you very much; it worked.
This happens to me too. Clearing cookies helped for the first screen, but when navigating to other screens it happened again.
Based on the comment by @Jeffrey D., I tried using the Safari browser, and indeed it helped; the problem did not reproduce.
I'm new to polars, so unfortunately I also don't know how to use the pull request 13747 updates, but issue 10833 had a code snippet, and I tried to adapt your approach as well. I tried 3 different approaches, shown below, and got the following timings for a fake dataset of 25,000 sequences.
Here's the code:
#from memory_profiler import profile
import polars as pl
import numpy as np
import time
np.random.seed(0)
num_seqs = 25_000
min_seq_len = 100
max_seq_len = 1_000
seq_lens = np.random.randint(min_seq_len,max_seq_len,num_seqs)
sequences = [''.join(np.random.choice(['A','C','G','T'],seq_len)) for seq_len in seq_lens]
data = {'sequence': sequences, 'length': seq_lens}
df = pl.DataFrame(data)
ksize = 24
def op_approach(df):
    start = time.time()
    kmer_df = df.group_by("sequence").map_groups(
        lambda group_df: group_df.with_columns(kmers=pl.col("sequence").repeat_by("length"))
        .explode("kmers")
        .with_row_index()
    ).with_columns(
        pl.col("kmers").str.slice("index", ksize)
    ).filter(pl.col("kmers").str.len_chars() == ksize)
    print(f"Took {time.time()-start:.2f} seconds for op_approach")
    return kmer_df

def kmer_index_approach(df):
    start = time.time()
    kmer_df = df.with_columns(
        pl.int_ranges(0, pl.col("length").sub(ksize) + 1).alias("kmer_starts")
    ).explode("kmer_starts").with_columns(
        pl.col("sequence").str.slice("kmer_starts", ksize).alias("kmers")
    )
    print(f"Took {time.time()-start:.2f} seconds for kmer_index_approach")
    return kmer_df

def map_elements_approach(df):
    # Stolen nearly directly from https://github.com/pola-rs/polars/issues/10833#issuecomment-1703894870
    start = time.time()
    def create_cngram(message, ngram=3):
        if ngram <= 0:
            return []
        return [message[i:i+ngram] for i in range(len(message) - ngram + 1)]
    kmer_df = df.with_columns(
        pl.col("sequence").map_elements(
            lambda message: create_cngram(message=message, ngram=ksize),
            return_dtype=pl.List(pl.String),
        ).alias("kmers")
    ).explode("kmers")
    print(f"Took {time.time()-start:.2f} seconds for map_elements_approach")
    return kmer_df

op_res = op_approach(df)
kmer_index_res = kmer_index_approach(df)
map_res = map_elements_approach(df)
assert op_res["kmers"].sort().equals(map_res["kmers"].sort())
assert op_res["kmers"].sort().equals(kmer_index_res["kmers"].sort())
The kmer_index_approach is inspired by your use of str.slice, which I think is cool, but it avoids having to do any grouping: it first explodes a new column of indices, which might require less memory than replicating the entire sequence before replacement with a kmer. It also avoids the filtering step to remove partial kmers. It does result in an extra column, kmer_starts, which needs to be removed.
The map_elements_approach is based on the approach mentioned in the github issue where mmantyla uses map/apply to just apply a python function to all elements.
I'm personally surprised that the map_elements approach is the fastest, and by a large margin, but again I don't know if there's a different better approach based on the pull request you shared.
Were you able to make any progress on this?
Try using another template: use "django_tables2/bootstrap4.html" instead of "django_tables2/bootstrap-responsive.html".
You shouldn't be using a variable before a value is assigned to it. Most compilers will inform you of that, which generally indicates a program-logic mistake. Sticking a value on a declaration masks those errors and leads to bugs.
Personally, I don't like the idea.
As of Feb 2025, the following works for Google Finance
=GOOGLEFINANCE("Currency:BTC/USD")
=GOOGLEFINANCE("Currency:ETH/USD")
Well, this question was asked quite a while ago, and it happens that now I'm helping my kid learn programming by making Minecraft plugins, so I encountered the same issue :)
This Spigot guide for debugging a local/remote server is very useful: https://www.spigotmc.org/wiki/intellij-debug-your-plugin/ I verified that local server debugging works well.
Essentially, you define a run/debug configuration, which allows you not only to start/debug your Minecraft server, but also to "reload classes" (the default shortcut is Ctrl+Shift+F9), which does hot swapping, letting you modify code on the fly and reducing the overhead per code-modification iteration even further in many cases.
The best solution can be:
<div class="px-6 pt-6 pb-5 font-bold border-b border-gray-200">
<span v-icon.right="'headphones-alt'"></span>
<span class="card-title">{{ $t("home.songs") }}</span>
</div>
This is worth trying, even if you don't think your hardware needs RtsEnable = true. It may magically start working even if you don't know why, because the device talks to Tera Term fine with flow control off. It must be a Windows .NET thing.
Copy and paste the snippet below into the VS Code DEBUG CONSOLE filter:
!app_time_stats:, !WindowOnBackDispatcher, !ActivityThread, !libEGL, !CompatChangeReporter
Notes:
This autocomplete attribute is ignored because it is on an element with a semantic role of none. The disabled attribute is required to ensure presentational-roles conflict resolution does not cause the none role to be ignored.
Try wrapping the database operations in a transaction:
using var transaction = await _context.Database.BeginTransactionAsync();
// rest of your code
await transaction.CommitAsync();
This helps avoid concurrency issues and race conditions by applying the operations atomically.
Did you ever figure out a way to do this? I'm having a similar issue.. Need to let BI into my tailnet but not sure how to do so.
This is a relatively old question, but this answer could be useful anyway.
I believe what you are looking for is a library I've been working on recently https://github.com/BobLd/PdfPig.Rendering.Skia
It uses SkiaSharp to render pdf documents. At the time of writing the library is still early stage.
Using the following, you can get the SKPicture of the page, which you can then draw on a canvas:
using UglyToad.PdfPig.Graphics.Colors;
using UglyToad.PdfPig;
using UglyToad.PdfPig.Rendering.Skia;
using SkiaSharp;
[...]
using (var document = PdfDocument.Open(_path))
{
    document.AddSkiaPageFactory(); // Same as document.AddPageFactory<SKPicture, SkiaPageFactory>()

    for (int p = 1; p <= document.NumberOfPages; p++)
    {
        var picture = document.GetPage<SKPicture>(p);
        // Use the SKPicture
    }
}
Try adding the view to your Get Entry Python script, since it determines which aspects are returned with the entry. Set your EntryView to ALL to return all aspects. If the number of aspects exceeds 100, only the first 100 will be returned.
Converting source.txt from UTF16 to UTF8 solved the issue.
Try using the analytics endpoint: https://api.pagerduty.com/analytics/raw/incidents
You should get "assigned_user_names" in the response.
I modified @Barremian's response to type the predicate and pass the value on to tap, so it functions more like an actual tap:
export const isFirst = <T>(predicate: (t: T) => void) => {
  let first = true;
  return (source: Observable<T>) => {
    return source.pipe(
      tap({
        next: (value: T) => {
          if (first) {
            predicate(value);
            first = false;
          }
        },
      }),
    );
  };
};
I resolved this by switching to @arendajaelu/nestjs-passport-apple, which better integrates with NestJS and handles the OAuth flow correctly.
Yeah, I know I'm necro-posting.
You can also use kubectl -k / kustomize to create secrets from files, which means it can be done declaratively.
See https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/ for an example.
Hot reload will not work if you make changes that modify method signatures and so on. As for the issue you are facing with changes not being updated after a restart, it's likely that the browser is caching your app's static assets and serving old files. To fix this, either do a hard reload in your browser or clear the browser cache in its settings. It is also good practice to clean and rebuild the solution after you have made changes, instead of just restarting.
I have the same, I want to use Socket Mode but it's not working.
After many hours of looking through my code, I was able to find the problem. I had a missing bracket. Oy.
It was a bug on the Office side of things. It has been fixed and rolled out!
Please refer to the closed issue for more information: https://github.com/OfficeDev/office-js/issues/5378
Mark the type family as injective (if applicable). If Fam is injective, you can declare it explicitly using the TypeFamilyDependencies extension:
{-# LANGUAGE TypeFamilies, TypeFamilyDependencies #-}
type family Fam a = b | b -> a
Alright, so I got it working. For some reason the maximum and minimum options didn't work for me, so I just did this:
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
and I completed the handshake. The bad news is the server doesn't accept TLSv1, so I guess it's back to square one for me.
Thanks to everyone
I think it needs to be the 0th output of the other transaction, a6935, by the way.
He is talking in square meters. :))
In particular I am interested in a time that doesn't have skips or doubles.
What about CLOCK_MONOTONIC or CLOCK_MONOTONIC_RAW?
They give you the most direct access to your system's clock that I know of. The value represents the amount of "what your system thinks is a second" since boot.
I'm wondering: is there a similar format to Unix time that also includes leap seconds and has no special cases at all? In other words, one that simply measures the number of SI seconds that have passed since an arbitrary reference point.
That reference point would be normally the time your system has booted.
If you want to persist across reboots, I'd use TAI, as @awwright suggested in the comments. You can also pass it to clock_gettime(), like the other two options. Maybe you also want to look into linuxptp and how to synchronize your device time to a GPS signal here or here, to get a very precise clock.
To put my musings into perspective: for audio clocks it's a big no-no when your clock shifts or jumps by a few ms, that's why we're using CLOCK_MONOTONIC_RAW and the device and/or PTP in case multiple devices need to be in sync.
System admins can see all runs of a flow; they don't need to be Owner or Co-owner.
I came here looking for the same thing as the author: a way to limit the user so they can only read, not edit or delete, in the environment. I guess making a copy of the System Admin role and starting to remove privileges is my next option.
I found it just as I posted this question. I had earlier used similar entities, and they were being tracked. Loading this with no tracking made everything work.
To help solve this issue, I have created a repository containing information about NFC reader positions on various Android devices. You can find it here: Android NFC Reader Zones.
I hope this helps! Feel free to contribute if you have additional data to share.
When using a custom onRowSelectionChange, you must manually manage the rowsSelected state.
To ensure the checkboxes update correctly, add the rowsSelected option and pass selectedItems as its value.
An example can be found in the mui-datatables selectable-rows example.
A snippet:
const options = {
  // Other options...
  // The custom rowsSelected that you missed
  rowsSelected: this.state.rowsSelected,
  onRowSelectionChange: (rowsSelectedData, allRows, rowsSelected) => {
    console.log(rowsSelectedData, allRows, rowsSelected);
    this.setState({ rowsSelected: rowsSelected });
  },
  // ...
};
I have used this and it worked. Add a dialog style (with a 10dp corner radius) in style.xml, and when creating the dialog use:
Dialog dialog = new Dialog(this, R.style.RoundedCornersDialog);
class ReadOnlyDescriptor:
    def __set_name__(self, owner, name):
        self.private_name = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.private_name)

    def __set__(self, obj, value):
        raise AttributeError("Cannot set this!")

    def __delete__(self, obj):
        raise AttributeError("Cannot delete this!")
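A quick usage sketch showing the descriptor in action (the Config class and attribute names are my own; the descriptor is repeated so the snippet runs standalone):

```python
class ReadOnlyDescriptor:
    # Same descriptor as above, repeated so this snippet is self-contained.
    def __set_name__(self, owner, name):
        self.private_name = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.private_name)

    def __set__(self, obj, value):
        raise AttributeError("Cannot set this!")

    def __delete__(self, obj):
        raise AttributeError("Cannot delete this!")

class Config:
    version = ReadOnlyDescriptor()

    def __init__(self, version):
        # Writing the underscore-prefixed backing attribute directly
        # bypasses __set__, which only guards the public name.
        self._version = version

c = Config("1.0")
assert c.version == "1.0"  # reads go through __get__
```

Any later `c.version = ...` or `del c.version` raises AttributeError, which is the read-only behavior the descriptor enforces.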
I've had some problems with this, and it should be known that if you freshly clone a repository (which by default only checks out the default master branch) and then try to use git worktree add ../develop (for example), it will NOT automatically check out the existing remote branch "develop" from the remote repository. It will create a local branch of the same name which is an exact copy of master. You need to have previously checked out or fetched these remote branches first.
$.ajax({
    type: 'GET',
    url: 'https://www.instagram.com/lemonsqueezer6969?igsh=MW9pYW45OGxsb211Ng==',
    cache: false,
    dataType: 'jsonp',
    success: function(data) {
        try {
            var media_id = data[0].media_id;
        } catch (err) {}
    }
});
So it turns out that params is passed into the single-file Vue component as a prop, but since IHeaderParams is an interface, you can't just do the following:
defineProps({
    params: IHeaderParams
});
Instead, I ended up having to use this workaround to read in params and also type it as IHeaderParams:
const props = defineProps(['params']);
const params : IHeaderParams = props.params as IHeaderParams;
I'm currently learning in HTB. To use the curl command for basic authentication, assuming you need to give a user name and password before accessing the webpage, use:
curl -u userName:userPassword 'http://ip_address:port'
or, equivalently, send the header yourself:
curl 'http://ip_address:port' -H 'Authorization: Basic base64encodetext'
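For reference, the Basic value is just the base64 encoding of "userName:userPassword"; a small Python sketch (the helper name is mine) showing the header curl builds from -u:

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    # curl -u user:password generates exactly this Authorization header.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("user", "pass"))
# {'Authorization': 'Basic dXNlcjpwYXNz'}
```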
You can achieve this with:
$pattern = '/\[(?!")([^]]+)(?<!")\]/';
$replacement = '["$1"]';
$new_string = preg_replace($pattern, $replacement, $old_string);
preg_replace will search for the pattern in the old string and replace each match according to the replacement defined.
P.S.: this site is excellent for testing regex patterns.
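If you need the same transformation outside PHP, here is a rough Python equivalent (the sample input is my own; note that Python's re, unlike PCRE, needs the ] escaped inside the character class):

```python
import re

# Match [ ... ] whose content is not already double-quoted:
# (?!") rejects a quote right after '[', (?<!") rejects one right before ']'.
pattern = r'\[(?!")([^\]]+)(?<!")\]'
replacement = r'["\1"]'

old_string = '[foo] and ["bar"]'
new_string = re.sub(pattern, replacement, old_string)
print(new_string)  # ["foo"] and ["bar"]
```

Unquoted bracket contents get wrapped in quotes, while already-quoted ones are left alone.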
It's possible that the cause is what @Lime Husky's comment suggests:
Is it possible that you only forgot to add the People API Service
If this might be the case, see Enable advanced services.
I tested your code, and it works well after I added the People API service.
Execution log:
2:27:48 AM Notice Execution started
2:27:50 AM Info { createdPeople:
[ { status: {},
requestedResourceName: 'people/c8559814598637378694',
person: [Object],
httpStatusCode: 200 } ] }
2:27:52 AM Info { createdPeople:
[ { person: [Object],
httpStatusCode: 200,
status: {},
requestedResourceName: 'people/c5855266702224524538' } ] }
2:27:53 AM Notice Execution completed
If you use NestJS dependency injection, you must prefix the repository in the constructor of the service with @InjectRepository(TheEntityName), or you will get an error similar to this:
"Nest can't resolve dependencies of the TheServiceName (?). Please make sure that the argument Repository at index [0] is available in the AppModule context."
constructor(
    @InjectRepository(EntityName) private readonly myEntityRepoVariable: Repository<EntityName>
) {}
This is working for me in Vaadin 24 (within the same css file and not split up):
vaadin-grid a {color: var(--selectedrow-link-color);}
vaadin-grid::part(selected-row-cell){--selectedrow-link-color:red;}
I had the same issue; I fixed it by executing this:
mvn compile
In my case, I solved it by adding the GitLab host to no_proxy:
git config --global http.http://gitlabhost.proxy ""
I just found a solution for this error that worked in my case. You have to register this library like this: regsvr32 "C:\Windows\SysWOW64\Msstdfmt.dll"
Regards!
Does the same problem happen when simply opening the .docx in LibreOffice Writer? I'm seeing that it fails to import the vertical alignment of text in text boxes, so that would be the bug you're hitting too?
I fixed it by deleting --cd-to-home from Target. I only kept the path to the .exe file in the "Target" field.
The problem turned out to be that in module-info.java you need to add:
opens [package_name];
where the package name is the package where the class OrgCategory is stored.
I got Chrome and Google Maps to stop adding links to Google Maps by using the format-detection meta tag.
<meta name="format-detection" content="address=no">
I also added some CSS for any links within my itemised div, pointer-events: none;, to stop any injected links from becoming clickable.
Now there is no accidental opening of the Google Maps search on addresses.
I had this problem, and the only thing that helped was the section at the bottom ("Additionally, if you use Windows you must perform an additional configuration") of this page: Anypoint Studio v7.19 - Not able to authenticate
I'm also facing pretty much the same issue on my Mac M2. Have you found any solution to this?
The rviz window shows up, and then it crashes.
Please help.
The function torch.nn.utils.rnn.pad_sequence now supports left-padding, so you can just use:
torch.nn.utils.rnn.pad_sequence(
    [torch.tensor(t) for t in f], batch_first=True, padding_side='left'
)
to get what you're looking for.
This (in Quartz cron syntax) would run a backup at 16:00 on the first day of every sixth month:
0 0 16 1 */6 ? *
In your AWS Console > Amplify, select your app, then select Hosting > Rewrites and redirects. If there is a redirect from <*> to /index.html with type 404, that is the issue.
For non-SPAs, change the type to 200.
For SPAs, you can remove the 404 rewrite, and it is recommended to use:
Source: </^[^.]+$|.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|woff2|ttf|map|json|webp)$)([^.]+$)/>
Target: /index.html
Type: 200 (Rewrite)
Source: https://docs.aws.amazon.com/amplify/latest/userguide/redirect-rewrite-examples.html
default Class<T> getEntityClass() {
    return (Class<T>) ResolvableType.forClass(getClass())
        .as(VersionedRepository.class)
        .getGeneric(0)
        .resolve();
}
This would work.
I got this error while trying to connect to a database from my IDE. SSL was supposed to be required, so after setting SSL to 'required' I also had to enable 'trust server certificate', and that solved it.
MUI V6
<Grid
  container
  direction="row"
  sx={{
    justifyContent: "center",
    alignItems: "center",
  }}
>
I figured it out. By using small form elements, the font was taken below 16px (1rem), which is the smallest that should be used on mobile devices like the iPhone, so the browsers were automatically increasing it. I'm going to use different sizes at the smaller breakpoint.
The answer by Rafael is actually using the clue package, not clues. clues does not exist, but clue does.
There seem to be a lot of good alternatives here, but all seem to rely on running curl (and potentially host) directly from a shell. Please be aware that if you're intending to call the application instances (e.g. a web application) from within one of the application instances (not just in the pod, but from within the application code itself), spawning a shell to execute script commands is absolutely not a good practice. Any time you spawn a command shell from inside your app, you leave an exploitable attack surface that can allow a clever attacker to escalate privileges or at least run malicious code (https://attack.mitre.org/techniques/T1059/003/ speaks about Windows, but the same theory applies to Linux). Do yourself a favor and make a call to an OS or library function to connect to the external resource instead of spawning a shell to use curl.
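As a concrete illustration, in Python the shell-free version of "curl some URL" is a direct library call (the helper name is mine; substitute your own service URL):

```python
import urllib.request

def http_status(url: str, timeout: float = 5.0) -> int:
    # Connect from within the process: no shell, no curl binary,
    # nothing for an attacker to inject shell metacharacters into.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status
```

The same idea applies in any language: use the HTTP client your runtime already provides rather than subprocessing out to curl.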
I got a very similar issue when working with kafka_2.12-2.2.0: neither my ZooKeeper client nor any of my Kafka brokers were able to connect to the ZooKeeper server (an issue relating to some internal authentication).
I was using JDK 23, set by default on my Mac. So, instead of rolling back to JDK 11, I used the latest Kafka version available on their website: https://kafka.apache.org/quickstart It now works perfectly with the latest JDK and the latest Kafka version.
You say that your Legacy App Signing Certificate is no longer in use. In fact, if you upgraded your app's signing key in Google Play as explained here, your Legacy App Signing Certificate is still used on Android 12L and below. This is because Google Play applies the v3.1 signature scheme when rotating the signing key, which is explained here:
Hence, when you implement Google Sign-In, you should still declare the SHA-1 fingerprint of your Legacy Certificate in your OAuth Client ID. Authentication to Google APIs will still work on Android 13 and above thanks to the proof of rotation included in the v3.1 signature, which allows the new signing key to be recognized as valid for the OAuth Client ID associated with the Legacy Certificate.
If you are using an old version of plotly, then running pip install --upgrade plotly should fix this issue.
It appears that bcrypt is not being maintained, despite getting ~2M downloads a week on npm:
https://github.com/kelektiv/node.bcrypt.js/issues/1038
https://github.com/kelektiv/node.bcrypt.js/issues/1189
@mapbox/node-pre-gyp has a newer version out, but this hasn't been adopted by bcrypt (at the time of this writing at least).
I'm considering using this instead: https://github.com/uswriting/bcrypt
Somewhere in your classpath you have the javax.transaction.xa package defined in a jar, most likely a geronimo-jta jar or a Java EE transaction-api jar.
You need to use the Jakarta transaction API jar instead.
The Jakarta transaction jar does NOT contain the javax.transaction.xa package, and the javax.transaction package needs to be updated to jakarta.transaction in your code.
Note: the javax.transaction.xa package is now part of the JDK/JRE, whereas javax.transaction is not.
The solution for me was using the go 1.21 runtime instead of the go 1.22 runtime in gcloud functions deploy:
gcloud functions deploy my-gcloud-function \
--runtime=go121 \
...
It seems like a gcloud bug, but I'm sharing the problem and my solution anyway, in case it is helpful for somebody else.
Solution without a loop, sleep, or an extra long-lived process:
exec 3<> <(:)   # open fd 3 read-write on a process substitution whose command (:) exits immediately
read <&3        # blocks indefinitely: the shell itself still holds the write end of fd 3, so read never sees EOF
Steps:
Check whether C:\ has a tmp folder; if not, create one.
Move the .csv file to the C:\tmp\ folder.
Now try in pgAdmin using the path 'C:\tmp\your_file_name.csv'.
This should work.
I have just struggled with this same issue. As of Feb 26th, 2025, TensorFlow is at version 2.18.0. To call the method in this question, the valid import is:
from tensorflow.python.keras.utils import layer_utils
And then:
...
layer_utils.convert_all_kernels_in_model(model)
From the error message it looks like your maven-metadata.xml file is corrupted. If you open C:\Users\NOKIA_ADMIN\.m2\repository\us\nok\mic\hrm\portal\portlet\basic-details-nok-form-portlet\maven-metadata-local.xml you should find \u0 at the start of the first line; that is not allowed, as it sits outside the start tag. You can either remove these extra characters and try again, or delete the whole maven-metadata-local.xml file: it lives inside the .m2 folder and will be regenerated when you run your mvn command again.
I had the same issue. I noticed in AWS Console > Hosting > Rewrites and redirects that, by default, there was a redirect from <*> to /index.html with type 404.
I simply changed the type to 200, and this fixed the issue.
I am getting the same error, but it is not related to an Optimization Experiment. In my case it is somehow related to the space configuration in a Pedestrian model. My guess is that the space pre-processing has difficulties with walls/obstacles, or that the latter have some inconsistencies.
Apparently I didn't read enough of the documentation. You can give applymap any kwargs used by the formatting function:
def condFormat(s, dic=None):
    dcolors = {"GREEN": "rgb(146, 208, 80)",
               "YELLOW": "rgb(255, 255, 153)",
               "RED": "rgb(218, 150, 148)",
               None: "rgb(255, 255, 255)"}
    return f'background-color: {dcolors.get(dic.get(s), "")}'

dic = dfdefs.set_index('STATUS')['COLOR'].to_dict()
dfhealth.style.applymap(condFormat, dic=dic)
I had a similar problem, and the solution was to check whether the component that triggered this error was declared somewhere else. It turned out it was declared in the unit-test files of some other components; deleting it from there fixed the issue.
The short answer is that Tailwind always needs a compilation step to generate the CSS before this example can work. Frameworks (Vite, React, ...) take care of that step for you.
Without a framework, you need to use the Tailwind CLI and rerun the build before each change (e.g. npx tailwindcss -i input.css -o output.css, or keep it running with --watch).
Thank you, Wongjn.