I found the answer to my question. While Google's API allows batching many different kinds of requests, the batch endpoint does not accept export requests. The better approach I found was to make multiple requests concurrently to reduce my runtime, although this does end up using more bandwidth.
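As a rough illustration, here is a minimal sketch of firing several requests in parallel with a thread pool; the endpoint URL and ID list are placeholders, not Google's actual API:

import concurrent.futures

import requests

# Placeholder endpoint and IDs; substitute the real export endpoint and resource IDs.
BASE_URL = "https://example.googleapis.com/v1/items/{item_id}:export"
ITEM_IDS = ["a", "b", "c"]

def export_item(item_id):
    response = requests.get(BASE_URL.format(item_id=item_id), timeout=30)
    response.raise_for_status()
    return response.content

# Run the exports concurrently instead of one after another.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(export_item, ITEM_IDS))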
So what could be done is to control each slot's Application Insight settings with environment variables (https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-config for Java apps).
What I did was add the environment variable APPLICATIONINSIGHTS_METRIC_INTERVAL_SECONDS=<something_really_big_here> to dramatically decrease the frequency of reported metrics, and hence the amount of data ingested. A little bit hacky, but it does the trick.
I'm also running into a problem. I'm still a giant noob at this, having just started, but I wanted to use this command. What spots do I fill out, or what do I replace? Where do I put the uninstall string, what words do I delete, and how can I get the software name if that's required?
# Prompts for the software name; it is matched against DisplayName below, so nothing else needs replacing.
$software = Read-Host "Software you want to remove"
# Both the 64-bit and 32-bit uninstall registry locations.
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall', 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall'
Get-ChildItem $paths |
    Where-Object { $_.GetValue('DisplayName') -match "$software" } |
    ForEach-Object {
        # Builds the uninstall command from the registry value and appends silent flags.
        $uninstallString = $_.GetValue('UninstallString') + ' /quiet /norestart'
        Write-Host $uninstallString
        & "C:\Windows\SYSTEM32\cmd.exe" /c $uninstallString
    }
Having tested around, this seems to be an issue when using an external keyboard with an Android Studio emulator.
I've managed to reproduce the infinite loop when typing using an external keyboard in both my production app and a brand new app, using the code in the question.
Using the emulator keyboard and using a real device keyboard doesn't cause the infinite loop issue when using a TextFieldValue.
I can only assume this is a bug with the Emulator.
According to this article:
https://learn.microsoft.com/en-us/cpp/windows/determining-which-dlls-to-redistribute?view=msvc-170
It says:
Visual Studio 2022, 2019, 2017 and 2015 all have compatible toolset version numbers. For these versions, any newer Visual Studio Redistributable files may be used by apps built by a toolset from an older version. For example, Visual Studio 2022 Redistributable files may be used by apps built by using the Visual Studio 2017 or 2015 toolset. While they may be compatible, we don't support using older Redistributable files in apps built by using a newer toolset. For example, using the 2017 Redistributable files in apps built by using the 2019 toolset isn't supported.
So technically, all apps relying on the 2015 redistributable package should work with newer versions. The only thing that makes me skeptical is the use of the phrase "may be" in their documentation. Therefore, please be cautious with that, since this is a big change :)
The "true" is not correct!
Try "false"! But: the registration will always be successful - but you will not trigger a call nor fetching an incoming call - thats my findouts from today (March 24, 2025).
BTW: I am trying to register to my Fritz.Box and to fetch then a incoming call. no luck at all --- currently.
Anyone else has a working test app?
var sp = new SIPAccount(true, registerName, registerName, registerName, registerName, domainHost, 5060);
The call to the /api/2.2/jobs/run-now REST API only triggers the job. You'll need to call different APIs to get the output. The call to jobs/run-now should return a run ID.
If that's successful, then the next steps are to:
- check the status of the job to make sure it has finished running, using this API: /api/2.2/jobs/runs/get. You may have to loop until the job is done or has failed.
- once the job is done, get the output for that run using this API: /api/2.2/jobs/runs/get-output (a rough sketch of the whole flow is below).
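A rough Python sketch of that flow (host and token come from environment variables here; the response field names follow the Jobs API docs as I recall them, so double-check them, and note that for multi-task jobs get-output expects the task-level run ID):

import os
import time

import requests

HOST = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.azuredatabricks.net
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# 1. Trigger the job; the response contains the run ID.
run = requests.post(f"{HOST}/api/2.2/jobs/run-now", headers=HEADERS, json={"job_id": 123}).json()
run_id = run["run_id"]

# 2. Poll until the run reaches a terminal state.
while True:
    status = requests.get(f"{HOST}/api/2.2/jobs/runs/get", headers=HEADERS, params={"run_id": run_id}).json()
    if status["state"]["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        break
    time.sleep(30)

# 3. Fetch the output of the finished run.
output = requests.get(f"{HOST}/api/2.2/jobs/runs/get-output", headers=HEADERS, params={"run_id": run_id}).json()
print(output)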
One line in your fitness function looks suspicious.
private long fitness(Genotype<BitGene> genotype) {
var bitChromosome = genotype.chromosome().as(BitChromosome.class);
// Is this a field in your class? Should be a local variable.
variables = bitChromosome.stream()
.mapToInt(gene -> gene.bit() ? 1 : 0)
.toArray();
var objects = dataModel.getObjects();
...
}
If the variable is defined outside the fitness function, it will be shared between several threads during the evaluation. This leads to undefined behavior and might explain your results. The fitness function must be thread-safe and/or re-entrant.
private long fitness(Genotype<BitGene> genotype) {
var bitChromosome = genotype.chromosome().as(BitChromosome.class);
// Make it a local variable.
var variables = bitChromosome.stream()
.mapToInt(gene -> gene.bit() ? 1 : 0)
.toArray();
var objects = dataModel.getObjects();
...
}
Regards, Franz
I tried to make it work by modifying my build files, but nothing worked. I finally had to remove the library and clean the project so that my build would pass.
Reviving this thread: I've been testing the same in my iOS app, which uses the Mapbox SDK. Has anyone had success bringing their own DEM sources into Mapbox?
After adding a custom DEM with elevations encoded in RGB according to the Mapbox formula (height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)), I do see elevation changes on my map when 3D is toggled on, but they are wildly exaggerated spikes (mountains are something like hundreds of km tall). As a hack, I tried reducing terrain exaggeration by 100x, but that nearly flattened them. Intermediate values resulted in non-useful spikes, so I think I'm barking up the wrong tree.
Any ideas?
if !mapView.mapboxMap.sourceExists(withId: "my-custom-dem") {
var source = RasterDemSource(id: "my-custom-dem")
source.url = "mapbox://username.my-custom-dem"
source.encoding = .mapbox // Tells Mapbox how to decode the RGB values
source.maxzoom = 16.0
source.minzoom = 0.0 // Allow full zoom range from 0-16
A good option is to rebuild the Kaniko image to include git:
FROM alpine:latest AS build
RUN apk add --no-cache git
FROM gcr.io/kaniko-project/executor:v1.23.2-debug
COPY --from=build /usr/bin/git /usr/bin/git
This answer is different than the others posted here. Starting with Angular 17, the RouterTestingModule is deprecated. For my use case, after upgrading to Angular 17, my VsCode began flagging this deprecation:
RouterTestingModule must be replaced by provideRouter, but note that this is now a provider, not an import. So the following error was thrown by the Jasmine compiler when provideRouter was placed under imports:
The solution was simply to move the provider to where it should have gone (under providers), and the error was no longer thrown:
This blog entry shows a full example using provideRouter.
You can try using the eval() function: it evaluates a string and executes it as Python code if the string is valid Python.
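For example (keep in mind that eval will execute whatever it is given, so only use it on input you trust):

expression = "2 * (3 + 4)"
result = eval(expression)  # evaluates the string as Python code
print(result)              # 14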
It seems the default is 5 minutes:
internal sealed partial class DefaultHybridCache : HybridCache
{
internal const int DefaultExpirationMinutes = 5;
I am getting the same error in my release pipeline while trying to run dataform --run. I am using a service account key to authenticate the Dataform deployment to Google Cloud from an ADO release pipeline. The code works fine from my local machine; I only have this issue when running it from the ADO release pipeline. I have installed Node.js and npm, and the compile step runs fine, listing all the changes (actions) the sqlx files are going to perform, but when I try dataform --run it throws the error below:
Dataform encountered an error: Unexpected property "type", or property value type of "string" is incorrect.
2025-03-22T13:55:25.4594218Z ReferenceError: Unexpected property "type", or property value type of "string" is incorrect.
2025-03-22T13:55:25.4595203Z at /azp/_work/r21/a/_Dataform Build/Build_Dataform_Artifacts/.npm/node_modules/@dataform/cli/bundle.js:137:23
2025-03-22T13:55:25.4595912Z at Array.forEach (<anonymous>)
2025-03-22T13:55:25.4596630Z at checkFields (/azp/_work/r21/a/_Dataform Build/Build_Dataform_Artifacts/.npm/node_modules/@dataform/cli/bundle.js:118:33)
2025-03-22T13:55:25.4598307Z at verifyObjectMatchesProto (/azp/_work/r21/a/_Dataform
Understanding the error given by TypeScript is crucial. It is trying to say that where your code expects something of type A, you are giving it something of type B.
One of the easiest ways to solve this issue is by matching the type of the incoming object with the type of your defined/expected object.
There are several ways to achieve this. One way is to write a helper function that converts your output into the desired type.
//helper function to convert from string type to GraphQLCode type
function toGraphQLCode(value: string): GraphQLCode {
const enumValues = Object.values(GraphQLCode).filter((v) => typeof v === "string");
if (enumValues.includes(value)) {
return value as GraphQLCode;
}
throw new Error(`Invalid GraphQLCode value: ${value}`);
}
interface MyArrayItem {
code: GraphQLCode;
// ...other fields
}
const myArray: MyArrayItem[] = [];
// codeMapper that returns strings that matches enum values
const codeMapper = {
someKey: "SOME_VALUES",
// Add other mappings....
} as const;
// Example usage in your resolver or logic
const code = "someKey"; // Your logic/input
myArray.push({
code: toGraphQLCode(codeMapper[code as keyof typeof codeMapper]),
// ...other fields
});
// Example resolver (if this is part of a GraphQL resolver)
const resolvers = {
Query: {
myResolver: () => {
const myArray: MyArrayItem[] = [];
const code = "someKey";
myArray.push({
code: toGraphQLCode(codeMapper[code as keyof typeof codeMapper]),
// ...other fields
});
return myArray;
},
},
};
I guess this sample code should help you out. Please let me know if the error still exists - we can debug further!
My question is: is it another version of PowerShell, or a module, or a tool, or what?
According to official documentation:
A .NET tool is a special NuGet package that contains a console application.
This applies to PowerShell. dotnet tool, like python -m pip and the many equivalents across other languages' official implementations, is the official package manager for .NET.
Consequently, the aforementioned command installs PowerShell in a manner that your OS's package manager doesn't understand, but which is standard amongst .NET packages.
And another question: without this PowerShell (dotnet global), will I not be able to install any modules?
It supports modules installed via Install-Module.
cursor.commit()
This command was needed at the end, as per the commenter.
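For context, a minimal pyodbc-style sketch (the connection string and table are placeholders) showing where that commit goes:

import pyodbc

conn = pyodbc.connect("DSN=placeholder")  # placeholder connection string
cursor = conn.cursor()
cursor.execute("INSERT INTO my_table (col) VALUES (?)", ("value",))
cursor.commit()  # without this, the insert is never persisted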
Tried all the suggested methods but none worked for me. What finally did was going to Xcode -> Settings -> Accounts and signing in again with my dev account. After that previews started working again.
For me, if I imported IonDatetime, the ionChange event would not fire. If I omitted the import, the event fired.
It depends on what you are trying to achieve. A good example of setting min-height values would be situations with less content. Sticky footers are what come to mind... https://css-tricks.com/couple-takes-sticky-footer/
Create a snippet of your menu issue and you may get a better answer.
QNX is in the process of moving all of our porting activity to GitHub. Try out this README for Boost specifically:
https://github.com/qnx-ports/build-files/tree/main/ports/boost
If you are still encountering this issue, manually register App\Providers\FortifyServiceProvider::class in bootstrap/providers.php and run php artisan optimize:clear; that should solve the issue.
Add android:usesCleartextTraffic="true" to the <application> tag in android/app/src/main/AndroidManifest.xml.
That worked for me.
Is there a reason it needs to be detected within the iframe? Because you could try detecting the input in the parent window and checking if the iframe is active.
Go to File > Preferences > Settings, then Extensions > Emmet; scroll down and you'll find Preferences. Click "Edit in settings.json" and use the code below:
"emmet.preferences": {
"output.inlineBreak": 1,
},
These two headers are mandatory:
Content-Type: application/json
aeg-event-type: Notification
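For example, posting a test event to a webhook handler with Python's requests (the endpoint URL and event payload are made-up placeholders; the two headers above are the important part):

import requests

url = "http://localhost:7071/api/my-eventgrid-handler"  # placeholder endpoint
headers = {
    "Content-Type": "application/json",
    "aeg-event-type": "Notification",
}
event = [{
    "id": "1",
    "eventType": "MyApp.ItemCreated",
    "subject": "items/1",
    "eventTime": "2025-03-24T00:00:00Z",
    "data": {"itemId": 1},
    "dataVersion": "1.0",
}]
response = requests.post(url, headers=headers, json=event)
print(response.status_code)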
My ISP blocks traffic on specific ports (including port 80). By changing the port, my site is accessible from the outside!
The root cause of the problem was that I didn't change the job name between runs. Thanks to @JayashankarGS's answer for showing, correctly, an example with updated job names: job_name3, job_name5, etc. The YAML file and command() statement in the original question work correctly when name is changed to be unique or omitted entirely.
So I reported this also to the SAS Support and the result of the investigation/discussion is:
SAS R&D [...] confirm that what you have experienced is a bug (SAS R&D = SAS Research and Development Division) as well as
SAS R&D have now confirmed that this issue should be fixed in the 2025.03 release of SAS Studio
So: Case (provisionally) closed. 🕵️♂️
It isn't the responsibility of the data layer to handle the user input.
If we are talking about user input, add uniqueness validation to the front-end form or to the backend handler.
Better yet, use both: the front end checks uniqueness among the pairs being edited, while the backend handler checks uniqueness among all existing pairs (which requires an additional query to the database).
Once the validations pass, you can safely send the data to the data layer for saving.
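A minimal, framework-agnostic sketch of the backend check (fetch_existing_keys is a stand-in for whatever query your storage layer provides):

def validate_unique(pairs, fetch_existing_keys):
    """pairs is a list of (key, value) tuples submitted by the user."""
    keys = [key for key, _ in pairs]
    # 1. Uniqueness among the pairs being edited (the same check the front end does).
    if len(keys) != len(set(keys)):
        raise ValueError("Duplicate keys within the submitted pairs")
    # 2. Uniqueness against everything already stored (this is the extra database query).
    existing = set(fetch_existing_keys())
    clashes = [key for key in keys if key in existing]
    if clashes:
        raise ValueError(f"Keys already exist: {clashes}")
    return True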
There are some fine and informative solutions already, but I thought I'd share my solution. It is a method which can be added to any model class, to give single instances a rough equivalent of the queryset .update(...) method, with the same argument syntax. It makes use of the update_fields keyword argument to the model's save method, which enables more efficient behind-the-scenes database updating.
Essentially, it is a wrapper around calling instance.save(...) (whether you have overridden it in your model or not) that behaves the same, argument-wise, as queryset.update(...), and is more efficient than calling the "full" save method (for most purposes anyway). It also calls the pre- and post-save signals (but provides an argument for skipping these like a true queryset update), and allows passing in an arbitrary number of dictionaries as positional args, which will be automatically converted to keyword args for you.
from django.db import models
from django.db.models import ForeignKey, ManyToManyField, OneToOneField, JSONField
from django.contrib.postgres.fields import ArrayField
from django.db.models.manager import Manager
class YourModel(models.Model):
...
def log(self, *args, exception=None, **kwargs):
""" Optionally, usethis method to define how to use your logging setup to log messages related to this
instance specifically. For this SO opost I wuill simply assume 'you've got a logger defined globally,
and this method calls it. But some creative logging could produce highly useful organizational/
informational enhancements. """
(logger.exception if isinstance(exception, Exception) else logger.log)(*args, instance=self, **kwargs)
# Presuming that you've setup your logging to accept
# an instance object, which modifies where it is logged
# to, or something. Feel free to modify this method however
# you see fit.
def update(self, *update_by_dicts, skip_signals=False, use_query=False, refresh_instance=True,
validate_kwargs=False, allow_updating_relations=True, **kwargs):
"""
update instance method
This method enables the calling of instance.update(...) in approximately the same manner
as the update method of a queryset, allowing seamless behavior for updating either a
query or a single instance that's been loaded into memory. It provides options via the
keyword args as described below.
NOTE that providing positional (non-keyword) arguments can be done; if so, they must each be
a dictionary, which will be unpacked into keyword arguments as if each key/value pair
had been passed in as a keyword argument to this method.
Args:
skip_signals: If True, then both the pre- and post-save signals will be ignored after .save is
called at the end of this method's execution (the behavior of a queryset's update method).
You can also pass this in as 'pre' or 'post'; if you do, then the pre_save or post_save
signal, respectively, will be skipped, while the other will execute. The default value for
this argument is False, meaning that both pre_save and post_save are called, like a normal save
method call.
use_query: Normally, this method obviates the need to query "self" (which has already been loaded
from the database into an instance variable, after all) by utilizing the save method, but if
for some reason you would prefer not to have this behavior, passing in use_query=True will
cause the method to use a different approach: it will self-query using the ORM, and then call
the typical update method on the resulting one-element queryset. In this case, signals will be
skipped regardless of the value specified for the skip_signals argument. However, any positional
argument dicts provided will still be unpacked and passed in as keyword args.
refresh_instance: Only does anything if use_query is True; then, if refresh_instance is True, it will
call refresh_from_db after the ORM update(s), to make sure this instance isn't outdated. If you're
not going to use the instance anymore afterwards, specifying refresh_instance=False saves some time
since it won't re-query it from the database.
validate_kwargs: Normally, all keyword arguments (and keys from positional argument dicts, if any)
will be blindly passed to self.save. However, if there is any chance that the keys supplied may contain
values that do not correspond to existing fields in the model (such as in some system using
polymorphism or other forms of inheritance, where each child model may have some fields unique only to
that model), you can specify validate_kwargs=True to check all of the fields against their presence in the
instance (using hasattr and discarding if it returns False); specifying a list of argument names instead
will only check those arguments. This adds a nominal amount of overhead to the execution, so it
should only be used if it is needed, but it solves a couple of issues related to model
inheritance and/or polymorphism, and protects against dynamic instance updating situations going wrong, too.
allow_updating_relations: If True (which is the default), it enables passing in fields of models accessed through
relations (like ForeignKeys or ManyToManyFields, or their reverse relationships), via standard Django query
syntax using double underscores. These updates are done through a normal ORM queryset update, for
efficiency. If False, any argument whose name contains double underscores will not be valid, unless
it is used to reference a key in a JSONField, or an index in an ArrayField.
* PLEASE NOTE that many-to-many and reverse foreign key fields WILL NOT WORK without being validated, by providing
validate_kwargs=True or by including the related_name relation set manager name in the list provided to
validate_kwargs.
kwargs: All other keyword arguments will be interpreted as field names to update, and the values to update them
to. Please note also that any positional argument dicts will be unpacked and literally merged in with
the kwargs dict, with priority in the case of duplicate keys being given to those given in kwargs.
Returns:
A set of the field names that were not successfully updated (each failure is logged with the reason).
All expected exceptions are caught, logged, and gracefully handled, to avoid interrupting
your app/program, so the return value can be used to tell you if there were any issues.
"""
if skip_signals is True:
skip_signals = set(['pre_save', 'post_save'])
elif isinstance(skip_signals, str):
skip_signals = set([skip_signals])
elif isinstance(skip_signals, (list, tuple)):
skip_signals = set(skip_signals)
failed_keys = set()
separate_queries = []
self_query_updates = dict()
# Merging all keys and their values from any positional dictionaries provided directly into kwargs,
# for ease of processing later.
for more_args in update_by_dicts:
for key, val in more_args.items():
if key not in kwargs: # If key was passed explicitly as a kwarg, then we prioritize that and ignore it here
kwargs[key] = val
# If argument validation is desired, we'll do that here, removing invalid entries from kwargs and logging them
if validate_kwargs is True:
# If it is the literal value True, we'll convert it to a list containing the names of every field we requested to update
validate_kwargs = [ key for key in kwargs ]
if validate_kwargs:
# Unless it is False/None, we have at least one field to validate, and will do so now using hasattr, deleting any
# key/value pairs where the key is not a valid name of an attribute on this instance (or a relation, if applicable).
for field_name in validate_kwargs:
if not (result := check_field_name_validity(self, field_name, kwargs.get(field_name), allow_relations=allow_updating_relations)):
failed_keys.add(field_name)
del kwargs[field_name]
else:
if isinstance(result, dict):
# This is a related query, and the function has returned information about that query
separate_queries.append(result)
del kwargs[field_name] # Deleting from kwargs, since it will be called due to its reference in separate_queries
#elif (result is True) and use_query:
# # Converting any True result to dicts representing ORM queries to use, if use_query argument is True
# separate_queries.append({
# 'type': 'self',
# 'manager': type(self).objects.filter(pk=self.pk),
# 'update_statement': field_name,
# })
# del kwargs[field_name]
if len(kwargs) > 0:
upd_fields = set()
if skip_signals:
self.__skip_signals = skip_signals
else:
try:
del self.__skip_signals
except:
pass
for field_name, value in kwargs.items():
# Looping through any keys remaining in kwargs in order to modify this instance's fields accordingly, and then call
# self.save(update_fields=[...]) to perform a database UPDATE only on the changed fields for efficiency.
# If one or both save signals are to be skipped, we'll add attributes to the instance; I will leave it to the reader to
# modify the signal receiver(s) to check for the presence of said attributes, and return without doing anything if they're
# present and True.
# If any exceptions are raised, we'll catch, log, and inform the caller in the returned list of failed field names
try:
if use_query:
self_query_updates[field_name] = value # Advantage to using a self query is we don't pre-process, just execute updates as-is
else:
if '__' in field_name:
# traversing the path of indexes/attribute names for the field, since it has double underscores. The code below will handle JSONFields, ArrayFields,
# and relations for ForeignKeys and OneToOneFields, in the case that those fields were not validated
path_toks = field_name.split('__')
real_obj = None
if hasattr(self, path_toks[0]) and isinstance(getattr(self, path_toks[0]), models.Model):
result = check_field_name_validity(self, field_name, value, allow_relations=allow_updating_relations)
if not result:
raise ValueError(f"Non-validated kwarg '{field_name}' appears to be a related instance, but there was a problem with the field")
else:
separate_queries.append(result) # If it validates, adding it to separate_queries to avoid code duplication
continue
for attr in path_toks:
    if real_obj is None:
        # First token: start the walk at the instance itself.
        real_obj = self
    elif real_obj is self:
        # Second token: the root is a model field (the JSONField/ArrayField), so use getattr.
        real_obj = getattr(self, last_key)
    else:
        real_obj = real_obj[last_key]
    last_key = int(attr) if attr.isdigit() else attr
upd_fields.add(path_toks[0]) # Add just the root of the field's 'path', as that is the JSONField/etc that we'll update in save()
real_obj[last_key] = value
else:
setattr(self, field_name, value)
upd_fields.add(field_name)
except Exception as e:
self.log("Exception encountered during processing of field '{field_name}'", exception=e)
failed_keys.add(field_name)
if use_query:
# Executing "self-query" by filtering model class for the single instance and using atomic update method of queryset
type(self).objects.filter(pk=self.pk).update(**self_query_updates)
# Then, refreshing the instance from DB so we don't have outdated field values (unless not needed, via arguments)
if refresh_instance:
self.refresh_from_db()
else:
# Finally, calling save with update_fields
self.save(update_fields=upd_fields)
try:
del self.__skip_signals
except:
pass
# Lastly, executing whatever separate queries may have been requested due to related model fields
for qrydef in separate_queries:
try:
qrydef['manager'].update(**qrydef['update_statement'])
except Exception as e:
self.log(f"Exception while processing separarte model query defined by {qrydef}", exception=e)
faled_keys.add(qrydef['key'])
return failed_keys
(Outside of the model class definition)
def check_field_name_validity(instance, field_name, value, allow_relations=True):
if not hasattr(instance, field_name):
if '__' in field_name:
path_tokens = field_name.split('__')
arg_valid = False
try:
if hasattr(instance, path_tokens[0]):
# Usage of isinstance allows subclasses of these field types to be recognized
obj = getattr(instance, path_tokens[0])
if isinstance(instance._meta.get_field(path_tokens[0]), JSONField):
if len(path_tokens) > 1:
for nextkey in path_tokens[1:]:
obj = obj[nextkey]
# If we've made it to the end of the path of keys, then this argument is valid
return True
elif isinstance(instance._meta.get_field(path_tokens[0]), ArrayField):
value = obj
for index in [ (int(x) if x.isdigit() else x) for x in path_tokens[1:] ]:
# We converted each path "token" from the split on double underscore into
# an integer if it is a str representation of one, else left it as a str;
# this allows nested ArrayFields or ArrayFields made up of DictFields.
value = value[index]
# If we've made it to the end of the path of keys, then this argument is valid
return True
elif isinstance(obj, Manager):
# The fact that its class is Manager means it is a reverse relation, so we'll
# need to check the validity of the rest of it by seeing if a query on it results
# in an exception. We'll use __isnull since it should work for any valid field
if not allow_relations:
raise TypeError(f"Field '{path_tokens[0]}' of field name '{field_name}' is a related set manager, but allow_relations is False")
search_path = "__".join(path_tokens[1:])
try:
if not obj.filter(**{search_path + '__isnull': False}).exists():
raise ValueError(f"Relation field argument '{field_name}' does not exist or the query results in no matches")
except Exception as e:
raise e
else:
return {
'key': field_name,
'type': 'collection',
'manager': obj,
'update_statement': {"__".join(path_tokens[1:]): value},
}
else:
# We'll assume it is a OneToOne/ForeignKey field and if it's not it will error, which tells us it's invalid.
# 'obj' should contain the followed reference to the related instance already, if so.
# We'll recursively call this function on the related object and return the result. Recursion is a beautiful thing!
# If you don't know, now you know.
if not allow_relations:
raise TypeError(f"Field '{path_tokens[0]}' of field name '{field_name}' is a related model instance, but allow_relations is False")
arg_valid = check_field_name_validity(obj, "__".join(path_tokens[1:]), value, allow_relations=True)
if arg_valid:
return {
'key': field_name,
'type': 'instance',
'manager': type(obj).objects.filter(pk=obj.pk),
'update_statement': {"__".join(path_tokens[1:]): value},
}
else:
raise ValueError(f"Field '{field_name}' cannot be found in the instance nor any related managers or instances")
except Exception as e:
# If any exception is caught at all, it means this is not a valid argument, and we'll log the exception and remove it from kwargs
instance.log(f"Field '{field_name}' is invalid. Reason = {e}", exception=e)
return False
*NOTE: this is similar to what I use in the web platform I've been developing for the past couple of years, but I made changes that I haven't tested and I am prone to typos. If you use this and find issues, please let me know and I'll fix them in this post.
As pointed out in the comments, the issue stems from a breaking change introduced by setuptools>=78. A workaround is to use the PIP_CONSTRAINT environment variable to tell pip to use a lower version of setuptools. For instance, in a file named pip-constraint.txt:
setuptools<78
and then:
PIP_CONSTRAINT=pip-constraint.txt pip install stringcase
This works for any Python version.
Turns out this was a permissions issue with the artifact repository that I use. Temporarily resetting my .npmrc and hitting the normal npm registry fixed this.
Apologies, I wasn't clear, so I created a debate around data validation. As I said, my hands are tied to the branch system we are supplied, and being NHS, the options are limited.
Thanks to the suggestion from @ThomA, I used a windowed COUNT function to get what I was looking for; as mentioned by Alan, the second part was a simple join with a null check. Here is the final query to obtain the list:
WITH cte AS (
SELECT PatientId, CardId, Surname, Forenames, DateOfBirth, PostCode, Branch,
COUNT(CardId) OVER(PARTITION BY Surname, ForeNames, DateOfBirth, PostCode) as [records]
FROM pmr.Patient
WHERE Branch = 9
)
SELECT cte.* FROM cte
LEFT JOIN pmr.[Session] s ON cte.PatientId = s.Patient AND cte.Branch = s.Branch
WHERE records > 1 AND s.Patient IS NULL
ORDER BY Surname, Forenames
Gemma 3 requires transformers version 4.45.0.dev. Please install this specific version using the provided command - !pip install git+https://github.com/huggingface/[email protected] - and try again. I tried replicating the error and was able to get the tokenizers successfully. See the attached gist for more details. Thank you.
I find it rather funny (not) that there is such an incompatibility / unfriendly behaviour of git am when working with CRLF files:
foo.cpp: C source, ASCII text, with CRLF line terminators
I'd have expected to be able to do a plain format-patch, immediately followed by a corresponding plain git am, without any deviation shenanigans occurring. That would have been proper usability.
However, doing so fails both in another repository and in the original repository itself (when sitting at the correct pre-commit revision) - with both repos having identical .git/config settings.
patch -p[X] < foo.patch
however does work (but of course one will be missing out on the full cooked toolchain-provided commit handling then).
git am --keep-cr (thanks!) appears to work, but with a "warning: quoted CRLF detected" message.
git-svn repository here as well, so maybe that's the complication.
composer require akshay/laravel-url-maintenance
The akshay/laravel-url-maintenance package allows you to easily put a specific URL or route of your Laravel application under maintenance or bring it back up. It provides Artisan commands to manage site maintenance on a per-route basis.
Know that it will fail. I mean, how often are you making changes anyway??? When it fails, replay the build and hit it again. Hit it twice to make it nice.
Some instances cannot be resolved with a depends_on, like moving a subnet from one NAT gateway to another. There's no way around it: you have a 10-second outage and you have to apply twice. -target on the one you're removing it from and a subsequent -target on the one that is receiving it will reduce the outage...
Use a newer version of terraform:
https://developer.hashicorp.com/terraform/language/meta-arguments/depends_on
Note: Module support for depends_on was added in Terraform version 0.13, and prior versions can only use it with resources.
You can change the mustache template that handles the model naming. Not easy, but this would be a good alternative, as opposed to getting rid of it.
I have never seen names like this. Maybe you have some other mistakes that confuse the generator into adding the number suffix?
ChatGPT suggested brew install jimtcl.
Not sure you're still interested in an answer, but here are some thoughts.
'rules.value.list' seems to accept single values, or modalities, only, so your lines would not work. As I understand it, these rules are based on links between variables - conditional syntheses, if you wish. They do not allow you to restrict the range of values taken by variables, or to put conditions and restrictions on single variables. I myself wish they did...
Maybe one of the reasons is that, by allowing this, values between the original and the synthesized datasets could differ so much that utility would be dramatically reduced (for instance if values like 900 or 1000 had to be synthesized to 700 at most). What you could do is modify your variables in the original dataset so that all values above 700 are actually given the value 700.
As for your other issue, why not skip synthesizing the variable 'net' at all, and then sum the two 'payed' and 'received' variables in your final synthesized dataset? Sounds to me like the easiest solution.
Hope that helps
This package might be a great fit for your needs: https://github.com/TypiCMS/NestableCollection.
Yep, same here: refreshing fine on Desktop, failing in the Service, with this error message:
Data source error:The following system error occurred: Type mismatch. Table: POGoodsR.Cluster URI:WABI-NORTH-EUROPE-E-PRIMARY-redirect.analysis.windows.netActivity ID:b418b01a-90aa-48f3-8fbe-67db084ecd22Request ID:46f6bae7-6c08-4a65-9c2c-97ac94a059b6Time:2025-03-24 16:11:25Z
No solution yet 🤷♂️
You can use the mui ClickAwayListener API.
I'd love a modified Firefox fork with Node.js built in. But unfortunately no one has made one.
You can look at alternatives:
Maybe one of those will work better, but @sysrage is right, you should work on storing and retrieving your data in a better way.
Asciidoctor PDF currently only supports custom roles for styling paragraphs and phrases. If you need to add a role to a table and have that role affect the table style, you must create an extended converter that applies the style appropriately. Fortunately, there is an example in the documentation.
Instead of using basepath: "./company/the-tool", try setting basepath: "". TanStack Router is already handling relative paths based on the current URL. Since your app is under /company/the-tool, it should automatically resolve the correct routes.
It is not currently implemented. See this feature request, https://issuetracker.google.com/375867285.
Not sure if you found an answer for this, but I've had ZQ620 printers intermittently disconnect from Bluetooth on all devices used. It turned out the LinkOS version the printer was running had an issue and needed to be updated on the physical unit. Once the latest version was installed, it had no issues being recognized after that.
Basic support for DuckDB was introduced in DbVisualizer 25.1.
The exact property name in Oracle JDBC is "defaultRowPrefetch". Try with that name, and also try making it part of the URL with url?defaultRowPrefetch=1000 in case Spark is not parsing it correctly.
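For example, in PySpark (a sketch assuming an existing SparkSession named spark; whether the extra option is forwarded to the Oracle driver can depend on the Spark version, hence trying the URL form as well):

# Option 1: pass defaultRowPrefetch as a JDBC option / connection property
df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
      .option("dbtable", "MY_SCHEMA.MY_TABLE")
      .option("user", "scott")
      .option("password", "tiger")
      .option("defaultRowPrefetch", "1000")
      .load())

# Option 2: bake it into the URL in case Spark does not forward the option
url_with_prefetch = "jdbc:oracle:thin:@//dbhost:1521/service?defaultRowPrefetch=1000"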
Unfortunately, Terraform still does not support this cleanly, because the Azure APIs aren't sufficient yet. In my case I wanted to route traffic to several backend pools, one per function app, so I passed each function app's default domain as fqdns. Once Terraform finished running, the Application Gateway implicitly recognized the app service plan type, adjusted the type in the portal, and worked correctly as I expected.
backend_address_pool {
name = "<BACKEND_ADDRESS_POOL_NAME>"
fqdns = ["<DEFAULT_DOMAIN_NAME_FUNCTION_APP>"]
}
If one tries to (is in the questionably lucky situation of having to...) relocate some patch activity from one repo to another repo with a sufficiently different directory hierarchy layout, then possibly the best way is to:
create an interim temp commit to adapt the target repository to the layout required by the git am series
apply the patch series
remove the interim temp commit (via interactive rebase), or alternatively specifically revert it (whichever way is more suitable to express proper history requirements)
(thereby staying right within efficient fluid toolchain behaviour, rather than having to fight with "special" error state/situations such as *.rej files)
To get the size of the text, there's only this bit missing:
size = font_metrics.size(0, "daN/cm²")
This will return the size of the bounding box required to render the string. See the documentation about QFontMetrics for the specifics.
Note that, as @musicamante mentioned in the comments, there will probably be more work to do to get the widget the right size and to make sure all of the text is properly displayed in all environments.
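A minimal sketch with PyQt5 (assuming that is the binding in use), where the computed size is applied as the widget's minimum size:

from PyQt5.QtGui import QFontMetrics
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication([])
label = QLabel("daN/cm²")
font_metrics = QFontMetrics(label.font())
size = font_metrics.size(0, label.text())  # bounding box needed to render the text
label.setMinimumSize(size)
label.show()
app.exec_()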
An infinite value is the same as an empty value and results in the same value as the parent's.
P.S. A child cannot set limits higher than its parent (I struggled to understand this).
If you anytime feel unsure you could do this:
int3
push 0
int3
int3 is a breakpoint trap; you can use info registers in gdb to check the difference between rsp before the push and rsp after.
That's a OneLake operation, not a Semantic Model operation. So not XMLA endpoint.
There's a REST API for that: https://learn.microsoft.com/en-us/rest/api/fabric/core/onelake-shortcuts
That was indeed helpful. Below are a few ways to check for non-printable characters in a file:
cat -A file.txt
cat -vte file.txt
or
od -c file.txt
Create an API 35 emulator without the 16 KB page size (Google APIs + Play Store image). That's how I got past that error.
I created a repo that solves this problem, feel free to use it: https://github.com/derevyan/aws-langchain-lambda-layer
The newly introduced function "CurryApplied" (https://reference.wolfram.com/language/ref/CurryApplied.html) may fit your need. CurryApplied[A,{2,1}][G] is exactly what you want.
For example:
Input:
A[F_, G_] := D[F, x] D[G, y];
Print[CurryApplied[A, 2][x][y^2]];
Print[CurryApplied[A, {2, 1}][y][x^2]];
Print[CurryApplied[A, {2, 1}][x^2][y]];
Output:
2y
2x
0
Add a div outside the canvas element:
<div><canvas id="zeitsteuerungChart"></canvas></div>
It should also work by pressing F5, or by right-clicking any file in the subdirectory you are in and refreshing from there.
I know this is an old post, but THANK YOU!!!! I've been fighting this exact same issue for 2 days, never thought to check the legacy MFA setup.
from sendgrid.helpers.mail import Mail, Asm
mail = Mail(
from_email='[email protected]',
to_emails='[email protected]',
subject='Your Subject Here',
html_content='<div>This is your email content.</div>'
)
asm = Asm(
group_id=123
)
mail.asm = asm
I encountered an error related to #include <json/json.h> in my project. After reviewing the comments(by @IInspectable), I followed the suggestion of moving the #include <json/json.h> to the top of my includes, and the error was resolved. Thanks for the help!
Maybe it's not possible to use OE in the compiler? It may already have the effect you are expecting. Check out the ATF16V8's datasheet.
I am just learning Python and I'm surprised none of the experts have provided this explanation:
If you type
help(<function name>)
Python will return the docstring text, BUT NOT YOUR COMMENTS. This seems to be the fundamental answer to your question. The other answers are helpful and nuanced, but I submit that this is what you were looking for.
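A quick illustration:

def area(radius):
    """Return the area of a circle with the given radius."""
    # pi is rounded here for brevity -- this comment will NOT appear in help()
    return 3.14159 * radius ** 2

help(area)  # prints the signature and the docstring, but not the inline comment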
As explained here, you might need to use sharex=True and sharey=True options, so that the proper plotting space is selected.
In my case I was experiencing a similar issue (non_field_errors: ["Session value state missing."]) and resolved it by setting SESSION_COOKIE_SAMESITE = 'none' and SESSION_COOKIE_SECURE = False in my settings.py, running a development server with localhost:5173 on the frontend and http://127.0.0.1:8000 on the backend.
I might be very late, but I'm gonna give my answer here.
What you can do is construct the decimal string yourself: starting from the most significant bit, multiply the running result by two, and if the current bit is set, add one. Make sure you handle carrying digits.
Another (quicker) way is to use fundamental types for the digits: create a vector of a built-in integer type and choose a power of ten as the maximum value per element, so each element represents several decimal digits at once. Then you can iterate over the values and concatenate the stringified values.
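A minimal sketch of the first (double-and-add) approach, written in Python over a plain decimal string just to illustrate the carrying logic; in C++ you would do the same over a std::string or a digit vector:

def double_decimal(s):
    """Double a non-negative decimal number held in a string, carrying by hand."""
    carry, out = 0, []
    for ch in reversed(s):
        d = int(ch) * 2 + carry
        out.append(str(d % 10))
        carry = d // 10
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

def add_one(s):
    """Add one to a decimal string, again carrying by hand."""
    carry, out = 1, []
    for ch in reversed(s):
        d = int(ch) + carry
        out.append(str(d % 10))
        carry = d // 10
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

def bits_to_decimal(bits):
    """bits is an iterable of booleans, most significant bit first."""
    result = "0"
    for bit in bits:
        result = double_decimal(result)  # shift the running value left by one binary place
        if bit:
            result = add_one(result)
    return result

print(bits_to_decimal([True, False, True, True]))  # 1011 in binary -> "11"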
I found a solution for extracting date and time information from miniDV AVI files. Here's the method I used (a small Python sketch of the marker search follows the steps):
Steps:
Search for the identifier: look for the sequence "ix01".
Locate "00db": after finding "ix01", search for "00db" in the file. Inside this cluster you will find the auxiliary data.
Extract the date: search for 0x62 ("recdate"). It's followed by 4 bytes that represent the date.
For example: 62 FF C2 E9 07 → 2007-09-02.
Extract the time: search for 0x63 ("rectime"). It's followed by 4 bytes that represent the time.
For example: 63 FF 84 C1 D8 → 18:41:04.
Time Encoding:
Minutes and Seconds are encoded as hexadecimal values with a shift:
8 = 0, 9 = 1, A = 2, B = 3, C = 4, D = 5 for MM and SS.
Hours are encoded in a similar way:
C = 0, D = 1 for HH.
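Here is the Python sketch mentioned above. It only locates the markers and pulls out the raw recdate/rectime bytes; decoding the nibbles follows the scheme described in the steps, and since 0x62/0x63 are single bytes you may hit false positives, so treat the offsets and the assumed cluster size as rough assumptions:

def find_rec_bytes(path):
    with open(path, "rb") as f:
        data = f.read()
    ix = data.find(b"ix01")
    if ix == -1:
        return None
    db = data.find(b"00db", ix)
    if db == -1:
        return None
    cluster = data[db:db + 4096]  # assumed size of the auxiliary data cluster
    rec = {}
    date_pos = cluster.find(b"\x62")  # 'recdate' pack header
    if date_pos != -1:
        rec["recdate"] = cluster[date_pos + 1:date_pos + 5]
    time_pos = cluster.find(b"\x63")  # 'rectime' pack header
    if time_pos != -1:
        rec["rectime"] = cluster[time_pos + 1:time_pos + 5]
    return rec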
The issue is still present even in TestNG 7.11.0...
There is no way to get the camera to focus or to use the macro camera, but what did work was setting the zoom value to 3. It uses the main camera but "seems" like a macro camera.
Found the solution! It was answered here: I had two different instances of the same plugin installed in my project. One was the original plugin and the second was a fork that had a specific fix in it. But somehow the original got loaded back into the project when I dropped the ios platform and then re-added it.
It loads a lot of data in the form of a .js file; this js file contains a big database in the form of a variable.
This is an XY Problem. You should use a proper database and/or load the data in chunks. You are experiencing poor performance because your code is bad. Fix your code.
You have to make sure to close a Clip whenever possible.
If there are many Clips open simultaneously, you may experience strange errors.
I recommend this for sporadic playing:
//Should usually be executed in a separate, dedicated Thread.
clip.open(Source);
//Maybe wait a bit for the Clip to actually open
clip.start();
clip.drain();
clip.close();
I observed exactly the same behavior as you described above. My solution was checking the field type again and ensuring the field type is DateTime instead of Date. Now the time can be stored as well.
Unfortunately, you cannot change the type of a field within the existing layer. You can either create a new field or change the field type with the Refactor Fields function.
The x-cache order is the other way around: shield -> edge. If using NGWAF it will be waf -> shield -> edge.
So what you are seeing is a cache hit on the edge and a miss on the shield (the item should be cached on the shield too, but since the request was fully served by the edge it never got to the shield).
You can extract the token using, for example, a JSON Extractor and save it into a JMeter variable.
Then for the next request you can add the token as a header using HTTP Header Manager
With regards to "concurrent user login" - you can control the load in Thread Group
The below setup starts with 1 user and adds 1 user each second for 1 minute, then 60 users run together for 1 minute:
If your IDE enters power-saving mode, the color scheme may stop working. After trying multiple solutions, I realized the issue was solely caused by power-saving mode.
It could well be a Sparx-specific stereotype applied to metaclasses - https://sparxsystems.com/enterprise_architect_user_guide/17.0/modeling_frameworks/metaconstraint.html
This issue still exists in 2025. Windows 11, Visual Studio 17.13.2.
Answer https://stackoverflow.com/a/8379421/1878731 adapted to Rails 8:
require 'i18n'
if (Rails.env.development? || Rails.env.test?) && ENV['DEBUG_TRANSLATION']
module I18nDebug
def translate(key = nil, throw: false, raise: false, locale: nil, **options)
Rails.logger.debug "Translate: #{[key, { throw:, raise:, locale:, **options }].inspect}"
super
end
end
I18n.singleton_class.prepend(I18nDebug)
end
I use Legend-State for fine-grained reactivity with minimal renders. Check out this link:
Just change Test to test in your test case methods!
It seems like there is now documentation here: https://github.com/vuejs/language-tools/wiki/Vue-Compiler-Options
Struggling with the same. I can get module federation to work in RR7 by using the "runtime" package from the new module federation project, but unable to get the vite plugin to work.
https://module-federation.io/guide/basic/runtime.html
Maybe you could create a small repro and post an issue on the RR7 repo?
I was mapping on the WRONG NAMES: instead of Riskname I was using Rname... Sorry to anyone who was troubleshooting on my behalf.
We had a similar issue when constrained to Spring Boot 2.7 and JDK 8. The solution was to use the fabric8 kubernetes-client directly; it has an easy-to-use LeaderElector.
Version pkg:maven/io.fabric8/[email protected] is the last one that supports JDK 8:
<dependency>
<groupId>io.fabric8</groupId>
<artifactId>kubernetes-client</artifactId>
<version>6.13.5</version>
</dependency>
Then in your @Service start the leader election:
public static final String serviceLeaderLeaseName = "my-service-leader-lease-name";
private final String leaseHolder = System.getenv("HOSTNAME");
private boolean serviceLeader = false;
@PostConstruct
public void initService() {
KubernetesClient client = new KubernetesClientBuilder().build();
LeaderCallbacks leaderCallbacks = new LeaderCallbacks(() -> {
serviceLeader = true;
logger.info("I am now the leader: {}", leaseHolder);
}, () -> {
serviceLeader = false;
logger.info("I am no longer the leader: {}", leaseHolder);
}, newLeader -> {
logger.info("The new leader is: {}", newLeader);
});
LeaderElectionConfig leaderElectionConfig = new LeaderElectionConfigBuilder()
.withName(serviceLeaderLeaseName)
.withLock(new LeaseLock(kubeNamespace, serviceLeaderLeaseName, leaseHolder))
.withLeaseDuration(Duration.ofSeconds(30))
.withRenewDeadline(Duration.ofSeconds(20))
.withRetryPeriod(Duration.ofSeconds(2))
.withReleaseOnCancel()
.withLeaderCallbacks(leaderCallbacks)
.build();
Executor executor = Executors.newSingleThreadExecutor();
LeaderElector leaderElector = new LeaderElectorBuilder(client, executor).withConfig(
leaderElectionConfig).build();
leaderElector.start();
}
public boolean isLeader() {
return serviceLeader;
}
This example uses a LeaseLock but there is also a ConfigMapLock if you prefer that.
Each replica will compete to lock the named lease. You'll be able to see which replica was elected as leader. If the leader goes away for any reason the lease will time out and a new replica becomes leader.
In your code check myService.isLeader() to see if the current instance is the leader.
Did you find the solution?
You can add the following to your List and then set it to active programmatically:
@State private var editMode = EditMode.active
List(...) {
...
}
.environment(\.editMode, $editMode)
Uh oh!
buffer.length should be buffer.length - 1.
Array indexes start at 0 but the length starts at 1; therefore, you must use buffer.length - 1.
Your program must wait until the Buffer has data available.
URL is deprecated, BTW.
InputStream is = new BufferedInputStream(uc.getInputStream());
while (is.available() < 1024) ; // busy-wait until at least 1024 bytes are available
//do other stuff
is.close();
Where is `buffer` declared?
Perhaps a simple LOGGER.info("InputStream Successfully initialized!") or System.out.println("InputStream Successfully initialized!"); statement will give a large enough delay to let the buffer fill up.
Sometimes, the minuscule delay caused by a `print` statement can make a surprisingly large difference.
Discriminated unions allow you to encode a closed set of variants (which is what you're doing with the sealed class in Kotlin).
Try to add it via pip, just like you did with streamlit.
const messageUrl = whatsappLink+"?text=" + encodeURI(`Hello, I want to *pre-order* `) + '%0D%0A' + encodeURI(`Product Name: ${item?.name}`)
I was looking for a solution for WhatsApp API, and this works fine for me.
You need to update a couple of files to define the payment method facade in the payment plugin.
There is a naming convention between Adyen and Magento payment methods. A Magento payment method for Adyen plugin should start with adyen_ and followed by the payment method type name scalapay_3x.
Define adyen_scalapay_3x as Adyen\Payment\Model\Cart\Payment\AdditionalDataProvider\AdyenPm, and add the adyen_scalapay_3x logo to the view/base/web/images/logos directory as scalapay_3x.svg. After implementing those changes, run the following commands:
bin/magento setup:di:compile
bin/magento setup:static-content:deploy -f
bin/magento adyen:enablepaymentmethods:run
bin/magento cache:clean
You can also check pull request #2918 on the open-source Adyen Magento 2 plugin repository, and find the generic implementation document here if required.
I've had this problem in the past as well. I'm sure there's a setting to fix it, but I've always just been lazy and call .quit from my code when I'm done with a connection. This runs the QUIT command on Redis, and then Redis shuts down the connection immediately instead of waiting for it to time out.
If your pipeline runs have names with a predefined structure (e.g. they always start with ProdRelease or something), you can search for all pipelines that contain that string (assuming you can determine the run names from reading the YAML).
This only works if your pipeline has runs, however. If it has never been run, or previous runs have been deleted, then you're out of luck.
I recommend manually opening the Excel file that the LINK fields in Word refer to before you open the Word document.
At least in my case, with 95 fields, it made them update instantly.
In case the problem remains, you could try writing a procedure that reads the Excel sources from the link, opens Excel, collects the values and writes them directly into the respective Fields().Result.Text - this way you can skip updating the fields but still change their displayed value, and the LINK as such also remains.