Ohm.js does do this perfectly well: https://ohmjs.org/docs/patterns-and-pitfalls#operator-precedence. Just make sure you avoid ambiguous recursion.
We don't see any namespaces in your code.
The issue is that you have a namespace and a class with the same name.
*Looking for a way to solve that while keeping the same name.
The problem arose from the port forwarding via VSCode from my local machine to the remote Postgres database server. I switched to the ssh command line client and now the connections are closed properly.
The issue here was with security groups. When I added an additional rule to allow traffic from anywhere within the VPC CIDR range, it started working.
I suppose you are using Inertia/React and the <Link> component.
There is actually an easier way to bypass CORS: place an <a> tag instead of a <Link> tag in your template.
<a href="/auth/social/{provider}">
You can try the Bricks Forge Pro Onboarding plugin, as it closely aligns with your requirements. If you need any customizations, I'd be happy to assist you.
It turned out my code was just fine; the MudBlazor component is still in development.
I had a similar problem: with the generic template the rows were not visible on the card itself, but they were visible in the details section.
It was due to the class id suffix and the object id suffix; those have to be different at all times (if you do create them, as I was doing using CreateJWTNewObjects).
Also, if you want to show/hide the details section of the card, you have to add DetailsTemplateOverride.
Check out MirrorFly. We recently used it for a client with a similar scenario; also check this blog where they used MirrorFly, for reference.
Yes, a real headache:
If you use PHP-FPM, you need to modify the settings in the /opt/bitnami/apps/phpmyadmin/conf/php-fpm/php-settings.conf file:
php_value[upload_max_filesize] = 80M
php_value[post_max_size] = 80M
I don't think it is possible to filter for properties of products, as they are not joined on the query, so the data is not available. See https://github.com/shopware5/shopware/blob/5.7/engine/Shopware/Components/Api/Resource/Article.php#L273-L276
So without adding a custom API route, it does not seem to be possible to filter products based on their properties.
def nested_sum(lst):
    total = 0
    for item in lst:
        if isinstance(item, list):
            total += nested_sum(item)  # Recursive call for nested list
        else:
            total += item  # Add integer value
    return total

nested_list = [1, [2, 3], [4, [5, 6]], 7]
print("Sum:", nested_sum(nested_list))
First you have to disconnect the current repository. For this, use the command: git remote remove origin
Once that's done, you can link the new repository by running: git remote add origin https://github.com/UserName/repositoryName.git
Make sure to replace "UserName" with your actual GitHub username and "repositoryName" with the name of your repository.
Thanks to @jared
import numpy as np
# This line is given
pa = np.array([[ 213.00002 , 213.00002 ],[ -213.00002 , 213.00002 ],[ 213.00002 , -213.00002 ],[ -213.00002 , -213.00002 ]])
# This line is given
pa -= pa.min(0)
# RESULT
# [[426.00004, 426.00004], [ 0, 426.00004], [426.00004, 0 ], [ 0, 0 ]]
I found a solution to this issue and am sharing my findings:
With Xcode 16, swift-format is included as part of the Xcode toolchain, eliminating the need for external libraries and making Swift file formatting more convenient.
You can lint your Swift package without third-party dependencies by:
- adding swift-format to your Package.swift file
- using swift-format from Xcode 16

When your package builds, Xcode will show warnings for formatting issues like the inconsistent indentation in my example:
struct MyPackage {
var a: Int
var b: Int // This will show a warning in Xcode
var c: Int
}
The warnings are displayed directly in Xcode’s issue navigator during the build process.
For a comprehensive guide with step-by-step instructions and sample code, refer to this article: Linting a Swift Package with swift-format
Note that while the article references adding swift-format as a dependency, with Xcode 16 you can simplify this approach by using the built-in toolchain version instead.
A POST call against /api/order will trigger the creation of a new order, hence the missing required field errors. See https://shopware.stoplight.io/docs/admin-api/52ce9936f6ea4-create-a-new-order-resources
If you want to search for an order more specifically, you need to use the api/search/order endpoint. See https://shopware.stoplight.io/docs/admin-api/0b7d9d489b841-search-for-the-order-resources
Based on Solace topic taxonomy best practices, when you publish to a topic like sale/124, ideally "124" would be the id of the store. If you want this event to be applicable to multiple stores, then I would suggest a different topic taxonomy approach, such as a region identifier or some other common property which is shared amongst stores 1, 2, and 4.
This will allow each store to subscribe to the specific events they are interested in via queues.
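To make the taxonomy idea concrete, here is a tiny, Solace-agnostic sketch in plain Python; the sale/{region}/{storeId} format and the "*" wildcard rule are illustrative assumptions, not the Solace API:

# Illustrative only: a region-based topic scheme lets each store (or queue) subscribe selectively
def matches(subscription: str, topic: str) -> bool:
    sub_parts = subscription.split("/")
    topic_parts = topic.split("/")
    if len(sub_parts) != len(topic_parts):
        return False
    return all(s == "*" or s == t for s, t in zip(sub_parts, topic_parts))

published = "sale/emea/store-2"  # event published with a region identifier plus store id
for sub in ["sale/emea/*", "sale/apac/*", "sale/*/store-2"]:
    print(sub, "->", matches(sub, published))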
Hope this helps.
I copied the file tools/traffic_annotation/bin/linux64/traffic_annotation_auditor from another PC, and it works now.
I checked that it works under 100KB.
So, I changed whole-file loading to chunk loading, 16KB at a time, and then it works with 1.9MB!
Now I'm trying to optimize the chunk size.
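The original post does not say which language is involved, but the chunked-loading idea looks roughly like this minimal Python sketch (the file name and the 16 KB chunk size are placeholders):

# Hypothetical sketch: read and process a large file in 16 KB chunks instead of all at once
CHUNK_SIZE = 16 * 1024

def process_chunk(chunk: bytes) -> None:
    # Replace with whatever per-chunk work the application actually needs
    pass

with open("large_input.bin", "rb") as f:
    while True:
        chunk = f.read(CHUNK_SIZE)
        if not chunk:
            break
        process_chunk(chunk)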
The reason behind it is performance. What does DDL (Data Definition Language) even do? It gives you the "frame" for your DB: it creates tables with fields, etc. You want this to take effect immediately, and it's rather simple; that's why an interpreter is used. Also, your frame most likely won't change that often compared to DML. DML is another story: it involves challenging and difficult tasks which can't be executed immediately, hence a compiler. A very, very simple rule of thumb (yes, before everyone hunts me with pitchforks, I know there is more to it): an interpreter is efficient at the start and acts faster initially, but the longer it runs, the more inefficient it gets compared to a compiler.
Btw. you can also try https://github.com/tofuutils/tenv. It's a modern replacement for tfenv
Thanks to Charlieface, I was able to create the following:
using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Collections.ObjectModel;
using System.Text;
using System.Linq;
using System.Net;
class Program
{
static void Main()
{
string remoteMachine = "RemoteMachine";
string username = "user";
string password = "password";
// WSMan connection info
PSCredential credentials = new PSCredential(username, new System.Security.SecureString());
foreach (char c in password) credentials.Password.AppendChar(c);
WSManConnectionInfo connectionInfo = new WSManConnectionInfo(
new Uri($"http://{remoteMachine}:5985/wsman"),
"http://schemas.microsoft.com/powershell/Microsoft.PowerShell",
credentials
);
// Create a runspace with the WSMan connection
Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo);
runspace.Open();
// Create the pipeline to execute the PowerShell command
using (PowerShell pipeline = PowerShell.Create())
{
// Command to execute remotely
pipeline.Runspace = runspace;
// Add the Get-NetIPConfiguration command to the pipeline
pipeline.AddCommand("Get-NetIPConfiguration");
pipeline.AddCommand("Select-Object").AddParameter("Property", new string[] { "InterfaceAlias", "IPv4Address", "InterfaceIndex", "InterfaceDescription", "NetProfile", "IPv4DefaultGateway", "DNSServer" });
try
{
Collection<PSObject> results = pipeline.Invoke();
// Process results
foreach (PSObject result in results)
{
string interfaceAlias = result.Members["InterfaceAlias"]?.Value.ToString();
string ipv4Address = string.Empty;
string ipv4PrefixLength = string.Empty;
string ipv4Gateway = string.Empty;
string dnsServers = string.Empty;
var ipv4AddressObject = result.Members["IPv4Address"]?.Value as PSObject;
if (ipv4AddressObject != null)
{
var baseObject = ipv4AddressObject.BaseObject as dynamic;
ipv4Address = baseObject[0]?.IPAddress;
ipv4PrefixLength = baseObject[0]?.PrefixLength.ToString();
}
// Accessing Default Gateway
var ipv4GatewayObject = result.Members["IPv4DefaultGateway"]?.Value as PSObject;
if (ipv4GatewayObject != null)
{
var baseGatewayObject = ipv4GatewayObject.BaseObject as dynamic;
ipv4Gateway = baseGatewayObject[0]?.NextHop;
}
// Accessing DNS Servers
StringBuilder DNSServers = new StringBuilder();
var dnsServersObject = result.Members["DNSServer"]?.Value as PSObject;
if (dnsServersObject != null)
{
var baseDNSServersObject = dnsServersObject.BaseObject as dynamic;
if(baseDNSServersObject.Count > 1)
{
foreach(var DnsServerObject in baseDNSServersObject)
{
var baseDNSServers = DnsServerObject?.ServerAddresses;
if (baseDNSServers is object[] DnsServers)
{
int? cnt = 1;
foreach (var DnsServer in DnsServers)
{
string? DNS = null;
if (cnt < DnsServers.Length)
{
DNS = DnsServer.ToString();
DNS = DNS + ";";
}
else
{
DNS = DnsServer.ToString();
}
DNSServers.Append(DNS);
cnt++;
}
}
}
}
else
{
var baseDNSServers = baseDNSServersObject[0]?.ServerAddresses;
if (baseDNSServers is object[] DnsServers)
{
foreach (var DnsServer in DnsServers)
{
string? DNS = DnsServer.ToString();
DNSServers.Append(DNS);
}
}
}
}
// Convert CIDR to Subnet Mask
string subnetMask = CidrToMask(Convert.ToInt32(ipv4PrefixLength));
// Output results
Console.WriteLine($"InterfaceAlias : {interfaceAlias}");
Console.WriteLine($"IPv4 Address : {ipv4Address}");
Console.WriteLine($"Subnet Mask : {subnetMask}");
Console.WriteLine($"Default Gateway : {ipv4Gateway}");
Console.WriteLine($"DNSServer : {DNSServers.ToString()}");
Console.WriteLine();
}
}
catch (Exception ex)
{
Console.WriteLine("Error executing the pipeline: " + ex);
}
}
// Clean up and close the runspace
runspace.Close();
}
public static string CidrToMask(int cidr)
{
var mask = (cidr == 0) ? 0 : uint.MaxValue << (32 - cidr);
var bytes = BitConverter.GetBytes(mask).Reverse().ToArray();
return new IPAddress(bytes).ToString();
    }
}
I also had the same problem and deactivating conda worked for me
For anyone using Amazon AWS SMS, please refer to this link: https://repost.aws/questions/QU4hNT5ZjeRYC_pm_6P7yLxw/certificate-authority-ca-for-apple-push-notification-apn-is-changing?utm_source=chatgpt.com
Summary : If you're using AWS SNS for push notifications, no direct action is likely needed — AWS automatically manages certificates and trust stores.
I've just spoken to them and on their Ignite hosting it seems that sendmail isn't supported. The guy I spoke to didn't even really know what sendmail was. So you have to have an email account and use the standard livemail smtp apparently.
Check your GCC version by running this command in your terminal: gcc --version
Refer to this documentation and you will get the proper output:
https://filamentphp.com/content/tim-wassenburg-how-to-customize-logout-redirect
Probably the option --remove-on-error was introduced later on?
No, it’s not possible to directly open a mail app in "read mode" due to security restrictions. You can provide webmail links (e.g., Gmail, Outlook) or guide users to open their mail app manually.
I believe you are using Excel. Please edit the tags of your post and remove the irrelevant tags.
At the moment, OpenAPI 3.0 (formerly Swagger) is a widely used API description standard.
You can create a task using the piece of code below:
CREATE TASK mytask_hour
WAREHOUSE = mywh
SCHEDULE = 'USING CRON 0 9-17 * * SUN America/Los_Angeles'
AS
CALL SP_LOAD_PKG_LAB_SKU_LIST();
https://docs.snowflake.com/en/sql-reference/sql/create-task
Once the task is created, you should resume it so that it executes as per the schedule:
ALTER TASK mytask_hour RESUME;
https://docs.snowflake.com/en/sql-reference/sql/alter-task#examples
EXECUTE TASK <name>;
Manually triggers an asynchronous single run of a task (either a standalone task or the root task in a task graph) independent of the schedule defined for the task.
https://docs.snowflake.com/en/sql-reference/sql/execute-task
SHOW TASKS;
will list the tasks and certain associated details https://docs.snowflake.com/en/sql-reference/sql/show-tasks
read -p "Hey dear user, let's play a game, : " answer int done = 0 while [[ $answer != "yes" ]]; do
read -p "guess a number from 10 to 100 : " answer
if [[ $answer == 70 ]]; then
echo "Congratulations! You guessed the number."
elif [[ $answern -lt 70 ]]; then
echo " dear user, u too close"
elif [[ $answer -gt 70 ]]; then
echo " u are to far, dear user"
if (done); break
fi
done
@push('scripts')
{!! $dataTable->scripts() !!}
@endpush
try using this.
The issue is compatibility with the light-dark CSS function, which Angular Material 19 theming is based on. Check the compatibility table.
You can also read about this issue in this blog article.
It's good practice to wrap your code to prevent errors from interrupting your page execution. Put your code inside a block such as this:
try {
    // ...
} catch (Throwable $e) {
    $e->getMessage();
}
Creating .pylintrc alone didn't work for me, so you need to tell VS Code that the .pylintrc is located in your root project folder:
.pylintrc:
[MESSAGES CONTROL]
disable=C0103
./.vscode/settings.json:
{
"python.linting.pylintArgs": [
"--rcfile=./.pylintrc"
]
}
You can check the statement execution duration in the snowflake.account_usage.query_history view by using the required filters.
Sample query:
select query_id, total_elapsed_time from snowflake.account_usage.query_history;
https://docs.snowflake.com/en/sql-reference/account-usage/query_history
Beta support for Podman has also arrived for Mac users. You can check the location of your Podman executable by running which podman in the terminal and paste it into the relevant field to select the Podman connection. If the connection is successful, you can start using Podman instead of Docker.
"There are differences between usings before and after a namespace."
what are the differences?..
with file-scoped namespaces, usings are now inside that namespace, even if placed at the start of the file, aren't they?..
What is Chrome Apps Development?
Chrome Apps development is about creating apps that run on the Google Chrome browser. These apps work like normal software on a computer but are built using HTML, CSS, and JavaScript. They can even work offline and use features like notifications and file storage.
Main Features of Chrome Apps:
- Works Offline – Can be used without an internet connection.
- Access to System Features – Can use USB, Bluetooth, and storage.
- Works on Multiple Devices – Supports Windows, macOS, Linux, and ChromeOS.
- Secure & Safe – Runs in a protected environment to prevent security risks.
How to Create a Chrome App?
1. Make a Manifest File – This tells Chrome how the app should behave.
2. Design the App – Use HTML, CSS, and JavaScript for the layout and functions.
3. Use Chrome Features – Add options like storage, notifications, or file access.
4. Package the App – Bundle everything into one folder.
5. Publish & Share – Upload it to the Chrome Web Store or share it manually.
Are Chrome Apps Still Supported?
Google is shutting down Chrome Apps on most platforms except ChromeOS (Chromebooks). Developers now use Progressive Web Apps (PWAs) instead, which work on all devices.
Is it possible to map device allocated memory to system memory?
While this was not possible in 2017, on the GH200 (Grace Hopper) and most likely on the future GB200 (Grace Blackwell) platform, this is fully possible.
The device memory shows up as a NUMA region, and the CPU can access GPU memory and the GPU can access host memory seamlessly. This is because on the GH200 platform the CPU is connected to the GPU via a coherent off-package chip-to-chip interconnect (NVLink-C2C).
I recently fought with the same issue for one day. Maybe there is the same root cause for your problem. Below are my root cause and solution:
The Cause:
I added some definitions of my data in the namespace before the definition of Form1. After I did this, the "View Designer" submenu disappeared.
The solution:
Move the user-defined class after the Form1 definition.
Hope this is helpful.
Doesn't it have an out-of-the-box solution for that? Do I still have to use a workaround like this?
I got my answer: it's duplicating because there are two accounts, Google and iCloud. I am editing an iCloud contact, but the default account selected in Contacts is Google, so it creates (or rather updates) the contact in Google rather than iCloud.
Simply parse the JSON and extract only the content field, then write it into a file:
import json

response_json = json.loads(response.content)
file_content = response_json["content"]
with open('template.xlsx', 'wb') as f:
f.write(file_content)
To connect your localhost with a Slack app, use ngrok.
Check this https://ngrok.com/docs/integrations/slack/webhooks/
use window.Worker instead of Worker
const worker = new window.Worker(stockfishPath);
Typescript error TP1001 when creating a Web Worker in Next.js 15.1.3
You can achieve that by modifying the Modifier of NavigationBar. I am attaching an example below; you can refer to it:
NavigationBar(
modifier = Modifier
.shadow(
elevation = 8.dp,
shape = RoundedCornerShape(topStart = 20.dp, topEnd = 20.dp)
)
) {
// enter code here
}
Here is the output with an 8.dp shadow applied; you can modify it as you need.
If you just want to plot the dates with their related values:
import pandas as pd
import matplotlib.pyplot as plt

dates = ['2015-03-12', '2015-03-12', '2015-03-20', '2015-03-20']
values = [80, 55, 1, 100]
dates = pd.to_datetime(dates)
dates_str = dates.strftime('%Y-%m-%d')
plt.scatter(dates_str, values)
plt.show()
I ended up creating an aggregate table in the ETL phase instead.
@Michael's answer worked: creating a libs.versions.toml according to this official Android link. Thanks @Michael!
I encountered the same problem, but with lines. Unfortunately, we cannot increase the opacity beyond alpha; I suppose it's some bug...
As far as I understood your question, you are trying to achieve this kind of UI.
This link can help you achieve the UI; there are various kinds of GUI packages available which can help you. I have added code which can also help to achieve the UI; hope it will work.
Plugin :
awesome_drawer_bar: '<latest_release>'
AwesomeDrawerBar(
controller: AwesomeDrawerBarController,
menuScreen: MENU_SCREEN,
mainScreen: MAIN_SCREEN,
borderRadius: 24.0,
showShadow: true,
angle: -12.0,
backgroundColor: Colors.grey[300],
slideWidth: MediaQuery.of(context).size.width*.65,
openCurve: Curves.fastOutSlowIn,
closeCurve: Curves.bounceIn,
)
When you compile your model with Keras, two more parameters are added. This happens due to automatic parameter tracking and serialization mechanisms. Try using model.optimizer.get_config() to see the additional parameters.
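A minimal sketch of that kind of inspection (the tiny model below is an assumption purely for illustration, not the model from the question):

# Minimal sketch: inspect the optimizer configuration Keras tracks after compile()
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Prints the optimizer's own settings, which account for the extra tracked parameters
print(model.optimizer.get_config())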
Try something like this:
union TAttributeValue {
1: string stringValue,
2: i32 intValue,
3: double doubleValue,
4: bool boolValue
}
struct TRequest {
1: list<TAttributeValue> items
}
Below is what helped me reset my reactive form, with validations set to initial state after submission
import { FormGroupDirective} from '@angular/forms';
@ViewChild(FormGroupDirective) formGroupDirective!: FormGroupDirective;
// Add this after form submit
// To retain name value
const nameValue = this.form.get('name').value;
this.formGroupDirective.resetForm();
this.form.patchValue({ // Restore name value
name: nameValue
});
Got this answer from this excellent youtube video - https://www.youtube.com/watch?v=V4FKf8JDkC8
We have made improvements in this area recently, most recently in version 11.1.
Can I ask which version you're on? Does this still happen in version 11.1?
I managed to solve this by adding the following to the <system.webServer> tag in Web.config:
<httpErrors errorMode="DetailedLocalOnly" existingResponse="PassThrough">
</httpErrors>
Overview of Java Class Hierarchy
In Java, classes are loaded by class loaders, which are responsible for dynamically loading classes into the Java Virtual Machine (JVM). The class loading hierarchy includes:
Bootstrap Class Loader: Loads the core Java classes (e.g., java.lang.Object, java.lang.String) from the rt.jar file.
Extension Class Loader: Loads classes from extension libraries (e.g., ext directory).
System Class Loader: Loads classes from the system class path (e.g., CLASSPATH environment variable).
User-defined Class Loaders: These can be custom class loaders created by applications to load classes from specific locations.
Application-Level Classes
Application-level classes are those that are specific to your application and are not part of the JDK or other system libraries. These classes are typically loaded by the System Class Loader or a User-defined Class Loader.
Characteristics:
Custom: These classes are written by developers for their specific applications.
Loaded by System or Custom Class Loaders: They are loaded from the application's classpath.
Not Part of JDK: Unlike classes like java.lang.String, which are part of the JDK.
Examples:
Any custom class you write for your application (e.g., MyService, UserModel).
Third-party libraries used by your application (e.g., Spring Framework, Hibernate) if they are not part of the JDK.
Contrast with System-Level Classes
System-Level Classes:
These are classes that are part of the JDK or other system libraries.
Examples include java.lang.Object, java.util.ArrayList, java.io.File.
As per the documentation Gemini 1.0 Pro is no longer supported. You must migrate to a different, currently supported Gemini model to continue using the features you need. For the list of supported models, see Gemini models.
After protobuf 3.8
We can use this
parse.AllowUnknownField(true);
But it will report warning messages to stderr by default, and the Parser has another API to catch the messages:
void Parser::RecordErrorsTo(io::ErrorCollector* error_collector)
reference: https://github.com/protocolbuffers/protobuf/issues/5465
How do you install the compiled SNOPT with the build_pyoptsparse Python script? Could you tell me? Thanks very much!
Thanks Partha for the answer. Because of the same problem, I dug deeper to solve it and want to share my findings for the next person. Detaching is correct, but as Microsoft said:
Detaching entities is a slow process that may have side effects. This method is much more efficient at clearing all tracked entities from the context.
it is better to clear it instead of detaching it:
ChangeTracker.Clear();
However, notice that it clears the state of all entities, not only one of them. The Clear method in the ChangeTracker is:
public virtual void Clear()
=> StateManager.Clear(resetting: false);
In the end I chose another solution, which was using ExecuteUpdateAsync(), available since EF Core 7.0.
ExecuteUpdate and ExecuteDelete are a way to save data to the database without using EF's traditional change tracking and SaveChanges() method.
The code example is something like this:
Context.EntityName.Where(e => e.Id == id).ExecuteUpdateAsync(s => s.SetProperty(p => p.PropertyName, propertyValue));
I just had this issue.
Number 4 is key. No download, import, moving files, changing directories, cache directory change, or registry changes did it for me. The reason I don't specify all the steps I did is that this can be applied to many different apps and modules. And for installing pip and modules, there are good guides out there, better than I can make.
You have to modify your function signature from T to nullable T?, then return Resource.Success(null) instead of throwing an error. You can modify the function like below:
inline fun <reified T> Response<T?>.mapToResource(): Resource<T?> {
return if (this.isSuccessful) {
Resource.Success(this.body()) // Returns null if the body is empty
} else {
Log.e("Resource", this.message())
Resource.Error(this.message())
}
}
V = [0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 10.7, 10.9, 11.1, 11.3, 11.5, 11.7, 11.9, 12.1, 12.3, 12.5, 12.7, 12.9]
E = [10, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 65, 70, 75, 85, 90, 100, 105, 110, 130, 200, 290, 350, 420, 475, 530, 565, 590, 600, 610, 630, 630, 630]
Now getting this issue with Visual Studio Community 2022. Absolutely infuriating. Uninstalled and updated the registry as it was corrupt. It seems to not want to download from Microsoft for whatever reason.
@AndreiG is right. TimelineView exists to avoid things like messing with id to force a refresh.
TimelineView(.periodic(from: .now, by: 1)) { context in
Text(context.date.formatted(.dateTime.hour().minute().second()))
}
Adjust the formatting as you want.
I faced the same issue and these simple steps solved it for me: open Android Studio -> Main Menu -> File -> Invalidate Caches and Restart. Restart the computer after this, open Android Studio, and you're good to go!
Adding an empty comment line works for me,
e.g.
//
// Command-line arguments definition
#[derive(Parser, Debug)]
#[clap(
YMMV
I finally solved my problem, thanks to this issue: https://github.com/tailscale/tailscale/issues/12563, which made me notice the "Override local DNS" setting to which I had not paid attention (greyed out when no global DNS is set).
I removed the restricted DNS entries I had created, set my local DNS as the global DNS, and ticked the "Override local DNS" setting in the DNS section of the Tailscale admin portal. It now works fine.
I found it. It's note.duration.quarterLength.
I reworked the object created in the Select statement of both queries by specifying separate properties rather than the Instance property, and it worked. Still, it is strange that EF cannot map the Instance object given that it is an object from the database.
It seems that plugins cannot be shared directly between independent modules. Instead, the sharing can only be achieved through inheritance between parent and child modules.
Since server version 2.00.6, DolphinDB introduced a tiered storage strategy, which is only applicable to cluster mode. Tiered storage allows older data to be migrated to slower disks or cloud storage (S3). Old data (cold data) stored locally is infrequently accessed but consumes many resources. With tiered storage, cold data can be stored in the cloud or moved from fast disks (e.g., SSDs) to slower disks (e.g., HDDs), effectively saving resources.
The architecture of tiered storage is:
Hot data storage volumes → cold data storage volumes (coldVolumes) → stale data (deleted)
For more information, please refer to the docs: https://ci.dolphindb.cn/en/Tutorials/tiered_storage.html
It is NOT slow; it only appears to be slow.
The CLI spits out output word by word immediately after hitting Enter. In contrast, 'langchain' collects the entire output first, consuming 15-20 seconds depending on the length of the response, and then spits it out... Boom... Even subprocess.run() has the same effect.
Workaround:
import os
os.system('ollama run llama3.2:1b what is water short answer')
and then run the Python script from the terminal: python main.py
Here, you can see output almost immediately as a stream.
Save the output in a text file that can be used in your Python script.
os.system('ollama run llama3.2:1b what is water short answer > output.txt')
to append the text file:
os.system('ollama run llama3.2:1b what is water short answer >> output.txt')
I have posted this answer on GitHub as well.
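If you would rather stay inside Python and still see the output as a stream, a minimal sketch using subprocess.Popen could look like this (the command and model name are copied from the example above; how promptly text appears still depends on how the ollama CLI flushes its output):

# Minimal sketch: stream the CLI output line by line instead of waiting for the whole response
import subprocess

proc = subprocess.Popen(
    ["ollama", "run", "llama3.2:1b", "what is water short answer"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:  # print each line as soon as it arrives
    print(line, end="", flush=True)
proc.wait()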
I have the same problem, did you fix it?
Got an answer for it.
if (body == null) {
if (typeOf<T>().isMarkedNullable) {
Resource.Success(null as T)
} else {
Resource.Error("APi responded with success but there is no data Available.")
}
}
I could use the isMarkedNullable property to know whether the value can be nullable.
I needed relative paths to all files in some "root" folder and came up with this solution with the help of previous answers here:
realpath --relative-to=$PATH_TO_DIR "$(find $PATH_TO_DIR -type f)"
One can also cd into the directory $PATH_TO_DIR points to, and change $PATH_TO_DIR to .
*This solution works on Unix-based systems, where the realpath utility is present.
It's because the ref_no is used directly in the payload. The body in the payload must be a JSON object:
body = {"prepayId": ref_number}
body_json = json.dumps(body, separators=(',', ':'))
payload = f"{timestamp}\n{nonce}\n{body_json}\n"
It looks like there's a type mismatch in your Next.js project, possibly due to an issue with how you're exporting or structuring the page.tsx file.
Ensure the File is Named page.tsx
Next.js 13+ requires pages inside app/ to be named page.tsx, not Page.tsx or anything else.
Correct file structure:
app/
  dashboard/
    manage-users/
      page.tsx ✅
❌ Incorrect file name:
app/
  dashboard/
    manage-users/
      Page.tsx ❌ Wrong!

Possible Causes:

Incorrect Import Usage
It looks like you're trying to import a page (page.tsx or page.js) directly into another component. In Next.js, page.tsx files are meant for route handling and aren't typically imported like regular components.

Issue with OmitWithTag and Route Components
Next.js page files contain specific properties (config, generateStaticParams, etc.), which TypeScript is trying to omit in a way that results in an invalid type.

Expecting a Record but Getting a Component
The error suggests that it's expecting an object with no additional properties ({ [x: string]: never; }), but it's receiving a component or an import that does not match this constraint.
I changed my page.tsx exports properly and this error disappeared.
Python 2.7.3 is too old. Upgrade to at least 2.7.9 or better to the latest one, 2.7.18.
You have the two trainable parameters from the network, weight and bias, and there are two non-trainable params from the optimizer. This link explains it pretty well.
If the linked list has one node, this condition will return True, which indicates that the linked list is empty. Is that how it is supposed to be? I think
def isEmpty(self):
return self.head is None
will be better
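For context, a minimal self-contained sketch (the class and attribute names are assumptions about the original code) showing how that check behaves:

# Minimal sketch of a singly linked list whose isEmpty only looks at the head reference
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def isEmpty(self):
        return self.head is None

lst = LinkedList()
print(lst.isEmpty())   # True: no nodes yet
lst.head = Node(1)
print(lst.isEmpty())   # False: a one-node list is correctly reported as non-empty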
Matillion does it all, but it is slightly clunky. There isn't much info online about Matillion to support you, whether from Google, Stack Overflow, or LLMs. The APIs aren't well documented and the Git integration is very poor. It does work, though.
You might want to check out DBT. That's one of the main alternatives (We're switching from Matillion to DBT+airflow).
I've fixed this issue: move the rules from the job myjob to .template, because SOURCE_PARAMETER and CLASSIFICATION are outside variables for .template:
.template:
tags:
- xxx_cicd_test
allow_failure: false
script:
- echo "SOURCE_PARAMETER:$SOURCE_PARAMETER"
- bash scripts/build.sh $CLASSIFICATION
rules:
- if: $SOURCE_PARAMETER == "$CLASSIFICATION"
myjob:
extends: .template
parallel:
matrix:
- CLASSIFICATION: "param_1"
- CLASSIFICATION: "param_2"
- CLASSIFICATION: "param_3"
Remove node_modules and package-lock.json. Run the following command in your terminal:
rm -rf node_modules package-lock.json
In my case, the issue was caused by an empty factor level in one of my categorical variables. Try to look for empty factor levels with table(dataset$factor_variable)!
In Angular Material 19 you can do it this way:
your-component {
box-shadow: var(--mat-sys-level3);
}
More details in the official docs.
Optimize the database indexes like this:
ALTER TABLE users ADD INDEX idx_id_active (id, is_active);
Implement eager loading if you're frequently accessing relationships:
protected $with = ['roles', 'permissions'];
I encountered this same error when testing Keycloak 26.1.0 login using JMeter 5.6.3.
The response data body shows error - Restart login cookie not found. It may have expired; it may have been deleted or cookies are disabled in your browser. If cookies are disabled then enable them. Click Back to Application to login again.
The keycloak logs shows cookie_not_found error - 2025-02-28 16:52:33,459 WARN [org.keycloak.events] (executor-thread-11) type="LOGIN_ERROR", realmId="28bc2e7e-8095-4c80-b05c-da61c242500c", realmName="myrealm", clientId="testclient1", userId="null", ipAddress="127.0.0.1", error="cookie_not_found"
I also updated Realm Settings -> Security Defenses -> content-security-policy to 'self, them'.
Below is my JMeter setup under Thread Group:
1. HTTP Cookie Manager
2. HTTP Request for www.keycloak.org/app
3. HTTP Request for localhost:8080/realms/myrealm/protocol/openid-connect/auth and set client_id, redirect_uri, state, response_code, response_type, scope, nonce
4. HTTP Request for localhost:8080/realms/myrealm/login-actions/authenticate?session_code=${session_code}&execution=${execution}&client_id=testclient1&tab_id=${tab_id}&client_data=${client_data}
Are there any special settings required? Thanks.
I think I found the reason for the missing modules; see: Getting a list of DLLs currently loaded in a process C#
"After CLR v4, the accepted answer will show only unmanaged assemblies."
Using Microsoft.Diagnostics.Runtime will show managed assemblies too.
Although the numbers are still different, now I have all of them.
I had a similar issue. In my case, I used a privately hosted repository and was behind my company's proxy. My npm seemed to hang on 'sill idealTree' during the build, but produced a 503 Service Unavailable after 250 seconds.
The no_proxy setting in the .npmrc was ignored. Adding the URL of my repo to the environment variable NO_PROXY solved the issue.
The issue was caused by me using a deprecated task: qetza.replacetokens.replacetokens-task.replacetokens. After upgrading from v3 to v6 the secret was hidden.
My original question was missing this detail.
You could try creating an SQL class from the string you get, something like:
from psycopg.sql import SQL
...
query = SQL(qb_obj.getFinalQuery())
await acur.execute(query)
...
Looks like I have a \r character in the input file. Hence, the moment I print the API_NAME, the rest of the line gets printed on the next line. When the loop goes through the next iteration, the previously printed line gets overwritten.
To solve this, I removed the \r character from the API_NAME:
tr -d '\r'
I have faced the same issue; the captureInheritedThemes property is not available in Flutter 3.29.0.
You can use showMenu instead of PopupMenuButton. The downside of showMenu is that the position has to be handled manually.
I've just tried out CodeMaid, and although the automatic formatting didn't do quite what I wanted, its "CodeMaid Spade" view is EXACTLY what I wanted: it shows a list of all your methods in the selected file, and you can just drag and drop items around in the list and it automatically reorders the code accordingly.
This is how my code looks right now:
package org.socgen.ibi.effectCalc.jdbcConn
import com.typesafe.config.Config
import org.apache.spark.sql.types._
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._
import java.sql.{Connection, DriverManager, Statement}
import org.socgen.ibi.effectCalc.logger.EffectCalcLogger
import org.socgen.ibi.effectCalc.common.MsSqlJdbcConnectionInfo
class EffectCalcJdbcConnection(config: Config) {
private val microsoftSqlserverJDBCSpark = "com.microsoft.sqlserver.jdbc.spark"
val url: String = config.getString("ibi.db.jdbcURL")
val user: String = config.getString("ibi.db.user")
private val pwd: String = config.getString("ibi.db.password")
private val driverClassName: String = config.getString("ibi.db.driverClass")
private val databaseName: String = config.getString("ibi.db.stage_ec_sql")
private val dburl = s"$url;databasename=$databaseName"
private val dfMsqlWriteOptions = new MsSqlJdbcConnectionInfo(dburl, user, pwd)
private val connectionProperties = new java.util.Properties()
connectionProperties.setProperty("Driver", s"$driverClassName")
connectionProperties.setProperty("AutoCommit", "true")
connectionProperties.put("user", s"$user")
connectionProperties.put("password", s"$pwd")
Class.forName(s"$driverClassName")
private val conn: Connection = DriverManager.getConnection(dburl, user, pwd)
private var stmt: Statement = null
private def truncateTable(table: String): String = {
"TRUNCATE TABLE " + table + ";"
}
private def getTableColumns(
table: String,
connection: Connection
): List[String] = {
val columnStartingIndex = 1
val statement = s"SELECT TOP 0 * FROM $table"
val resultSetMetaData =
connection.createStatement().executeQuery(statement).getMetaData
println("Metadata" + resultSetMetaData)
val columnToFilter = List("TO ADD")
(columnStartingIndex to resultSetMetaData.getColumnCount).toList
.map(resultSetMetaData.getColumnName)
.filterNot(columnToFilter.contains(_))
}
def pushToResultsSQL(ResultsDf: DataFrame): Unit = {
val resultsTable = config.getString("ibi.db.stage_ec_sql_results_table")
try {
stmt = conn.createStatement()
stmt.executeUpdate(truncateTable(resultsTable))
EffectCalcLogger.info(
s" TABLE $resultsTable TRUNCATE ****",
this.getClass.getName
)
val numExecutors =
ResultsDf.sparkSession.conf.get("spark.executor.instances").toInt
val numExecutorsCores =
ResultsDf.sparkSession.conf.get("spark.executor.cores").toInt
val numPartitions = numExecutors * numExecutorsCores
EffectCalcLogger.info(
s"coalesce($numPartitions) <---> (numExecutors = $numExecutors) * (numExecutorsCores = $numExecutorsCores)",
this.getClass.getName
)
val String_format_list = List( "accounttype", "baseliiaggregategrosscarryoffbalance", "baseliiaggregategrosscarryonbalance", "baseliiaggregateprovoffbalance", "baseliiaggregateprovonbalance", "closingbatchid", "closingclosingdate", "closingifrs9eligibilityflaggrosscarrying", "closingifrs9eligibilityflagprovision", "closingifrs9provisioningstage", "contractid", "contractprimarycurrency", "effectivedate", "exposurenature", "fxsituation", "groupproduct", "indtypprod", "issuingapplicationcode", "openingbatchid", "openingclosingdate", "openingifrs9eligibilityflaggrosscarrying", "openingifrs9eligibilityflagprovision", "openingifrs9provisioningstage", "reportingentitymagnitudecode", "transfert", "closingdate", "frequency", "batchid"
)
val Decimal_format_list = List( "alloctakeovereffect", "closinggrosscarryingamounteur", "closingprovisionamounteur", "exchangeeureffect", "expireddealseffect", "expireddealseffect2", "newproductioneffect", "openinggrosscarryingamounteur", "openingprovisionamounteur", "overallstageeffect", "stages1s2effect", "stages1s3effect", "stages2s1effect", "stages2s3effect", "stages3s1effect", "stages3s2effect"
)
val selectWithCast = ResultsDf.columns.map(column => {
if (String_format_list.contains(column.toLowerCase))
col(column).cast(StringType)
else if (Decimal_format_list.contains(column.toLowerCase))
col(column).cast(DoubleType).cast(DecimalType(30, 2))
else col(column)
})
print(s"This is selectWithCast for Results Table: $selectWithCast")
val ResultsDfWithLoadDateTime =
ResultsDf.withColumn("loaddatetime", current_timestamp())
print(
s"this is ResultsDfWithLoadDateTime: \n ${ResultsDfWithLoadDateTime.show(false) }"
)
val orderOfColumnsInSQL = getTableColumns(resultsTable, conn)
print(s"This is order of columns for results table: $orderOfColumnsInSQL")
EffectCalcLogger.info(
s" Starting writing to $resultsTable table ",
this.getClass.getName
)
ResultsDfWithLoadDateTime.select(selectWithCast: _*).select(orderOfColumnsInSQL.map(col): _*).coalesce(numPartitions).write.mode(org.apache.spark.sql.SaveMode.Append).format(microsoftSqlserverJDBCSpark).options(dfMsqlWriteOptions.configMap ++ Map("dbTable" -> resultsTable)).save()
EffectCalcLogger.info(
s"Writing to $resultsTable table completed ",
this.getClass.getName
)
conn.close()
} catch {
case e: Exception =>
EffectCalcLogger.error(
s"Exception has been raised while pushing to $resultsTable:" + e
.printStackTrace(),
this.getClass.getName
)
throw e
}
}
--------------------------------
Now, in the above code, I want to not include loaddatetime in ResultsDf and instead exclude it from orderOfColumnsInSQL. Can you tell me how this can be done?