As @Saddles pointed out in the comments, the WHERE clause should go before the label, like this:
=QUERY(earnings!A:J, "SELECT A, B, C, YEAR(C), toDate(C) where A = 'Brooklyn' label YEAR(C) 'Year', toDate(C) 'Month' format toDate(C) 'MMM'", 1)
Using scric's code base, for which I am very thankful, I developed the following code that suits my needs well. I am posting it here for common use.
First, to simulate the 8 processes, I wrote a small program that, before exiting, waits for a number of seconds passed from the command line and returns 0 or 1 depending on whether the number of seconds waited is even or odd. The code, saved in test_process.c, is the following:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
int main (int argc, char** argv)
{
int i, duration, quiet=0;
if ( (argc < 2)
|| ( (duration = atoi(argv[1])) <= 0 ))
return -1;
for (i=2; i<argc && !quiet; i++)
quiet = (0 == strcmp(argv[i], "quiet"));
if (! quiet) for (i=2; i<argc; i++) printf("par_%d='%s' ", i, argv[i]);
if (! quiet) printf("\nStart sleep for %d sec!\n", duration);
sleep(duration);
if (! quiet) printf("END sleep for %d sec!\n", duration);
return duration & 1;
}
and it is compiled with:
cc test_process.c -o test_process
Second, I took scric's code and put it in a Python script called parallel.py:
#!/usr/bin/python3
import concurrent.futures
import subprocess
from datetime import datetime
from datetime import timedelta
class process :
def __init__ (self, cmd) :
self.invocation = cmd
self.duration = None
self.return_value = None
def __str__(self) :
return f"invocation = '{self.invocation}', \tduration = {self.duration} msec, \treturn_value = {self.return_value}\n"
def __repr__(self) :
return f"<process: invocation = '{self.invocation}', \tduration = {self.duration} msec, \treturn_value = {self.return_value}>\n"
pars = [process("1 quiet tanks 4 your support!"),
process("2 quiet 0xdead 0xbeef" ),
process("3 quiet three params here" ),
process("4 quiet 2 parameters " ),
process("5 quiet many parameters here: one two three" ),
process("6 quiet --1-- --6--" ),
process("7 quiet ----- -----" ),
process("8 quiet ===== =====")]
def run_process(string_command, index):
start_time = datetime.now()
process = subprocess.run((f'{string_command}'), shell=True, universal_newlines=True, stderr=subprocess.STDOUT)
end_time = datetime.now()
delta_time = end_time - start_time
return process.returncode, delta_time, index
def main():
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
futures = {executor.submit(run_process, f"./test_process {pars[i].invocation}", f"{i}"): i for i in range(8)}
for future in concurrent.futures.as_completed(futures):
result, time_taken, index = future.result()
pars[int(index)].duration = time_taken / timedelta(milliseconds=1)
pars[int(index)].return_value = result
print(f"{index}: result={result}, time_taken={time_taken}")
main()
print(pars)
Running parallel.py from the command line, you get the following output:
./parallel.py
0: result=1, time_taken=0:00:01.006900
1: result=0, time_taken=0:00:02.003389
2: result=1, time_taken=0:00:03.013232
3: result=0, time_taken=0:00:04.003463
4: result=1, time_taken=0:00:05.002579
5: result=0, time_taken=0:00:06.002372
6: result=1, time_taken=0:00:07.021016
7: result=0, time_taken=0:00:08.003653
[<process: invocation = '1 quiet tanks 4 your support!', duration = 1006.9 msec, return_value = 1>
, <process: invocation = '2 quiet 0xdead 0xbeef', duration = 2003.389 msec, return_value = 0>
, <process: invocation = '3 quiet three params here', duration = 3013.232 msec, return_value = 1>
, <process: invocation = '4 quiet 2 parameters ', duration = 4003.463 msec, return_value = 0>
, <process: invocation = '5 quiet many parameters here: one two three', duration = 5002.579 msec, return_value = 1>
, <process: invocation = '6 quiet --1-- --6--', duration = 6002.372 msec, return_value = 0>
, <process: invocation = '7 quiet ----- -----', duration = 7021.016 msec, return_value = 1>
, <process: invocation = '8 quiet ===== =====', duration = 8003.653 msec, return_value = 0>
]
In this way all the information I need is saved in the <pars> object.
If you need to call different processes, just put the name of the process in self.invocation and change the line
futures = {executor.submit(run_process, f"./test_process {pars[i].invocation}", f"{i}"): i for i in range(8)}
to
futures = {executor.submit(run_process, f"{pars[i].invocation}", f"{i}"): i for i in range(8)}
obviously changing the definition of pars in this way
pars = [process("./test_process 1 quiet tanks 4 your support!"),
process("./test_process 2 quiet 0xdead 0xbeef" ),
process("./test_process 3 quiet three params here" ),
process("./test_process 4 quiet 2 parameters " ),
process("./test_process 5 quiet many parameters here: one two three" ),
process("./test_process 6 quiet --1-- --6--" ),
process("./test_process 7 quiet ----- -----" ),
process("./test_process 8 quiet ===== =====")]
Thanks to everyone!
Instead of grid-template-columns: 1fr;, consider grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); for better responsiveness.
If images look off, adjust their height with object-fit: cover; or remove fixed height constraints.
Are you trying to produce the same as your current queries, except to only include the entityCode
instead of the complete vertices?
If so, I believe the solution you're looking for is tree().by('entityCode'). There are some relevant examples in the reference docs for the tree step: https://tinkerpop.apache.org/docs/3.7.3/reference/#tree-step
You should try out Hyperbrowser (https://hyperbrowser.ai)
It does everything you need, like browsers, captcha solving, proxies, stealth mode, etc., and also has AI agents built in, like Anthropic's Claude computer use and OpenAI's CUA.
All works as expected when renaming the file to hooks.server.ts
Go to Project settings -> Player -> Publishing Option -> Minify and select "debug" if you are still producing the game; finally, when you need the official release, set it to "release".
Best when used internally. I believe they use modules under the hood.
Better for sharing publicly with others (e.g. via npm package). There is a much better sharing ecosystem around modules.
Apparently the specific version of Micrometer (1.14.2) was problematic. Either downgrading or upgrading (to 1.14.5) solved the issue.
Indeed, no manual registration (meaning explicit use of AspectJ) was required anymore.
Once I got off the offending Micrometer version, both @Timed and @Counted worked fine out of the box.
When you call the enfold() function on a datetime object, it actually sets its fold attribute to 1, which means that this datetime is the one after the shift has occurred.
Python itself won't give this any special interpretation, but it is very useful when adjusting the datetime back to UTC using <datetime-object>.astimezone(timezone.utc).
Let's clarify with an example:
from dateutil import tz
from datetime import datetime, timezone

eastern = tz.gettz("America/New_York")  # assuming the US Eastern zone, where DST ends on 2017-11-05
first_1am = datetime(2017, 11, 5, 1, 0, 0, tzinfo=eastern)  # Ambiguous datetime object
# tz.datetime_ambiguous(first_1am)  # Outputs: True
second_1am = tz.enfold(first_1am)
# If you simply try to subtract both
(second_1am - first_1am).total_seconds()  # outputs: 0.0
# However, try shifting both to UTC
(second_1am.astimezone(timezone.utc) - first_1am.astimezone(timezone.utc)).total_seconds()
# outputs: 3600.0
The answer was to change send to send_message and to import datetime so I can use a timedelta for the duration of the poll.
Fixed Code:
import datetime
@client.tree.command(name="repo", description="Create a poll to see who's free for REPO", guild=GUILD_ID)
async def repo(interaction: discord.Interaction):
p = discord.Poll(question="Are you free for REPO?", duration=datetime.timedelta(hours=4))
p.add_answer(text="Yes")
p.add_answer(text="No")
await interaction.response.send_message(poll=p)
I was facing the same issue, which was caused by caching and LiteSpeed rules in .htaccess. I fixed it by renaming my .htaccess file to .htaccess-disabled, creating a new .htaccess file, and adding the code from
https://developer.wordpress.org/advanced-administration/server/web-server/httpd/#basic-wp
I used the basic WP code for my website, as it is not a subdomain install:
# BEGIN WordPress
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress
Choose the .htaccess code accordingly. Thank you!
I just ran your code in GitHub Codespaces and the logback works like a charm there. So do you get the logs when executing locally but not when running in Docker?
You need to call your venv's Python executable from your subprocess, as subprocess ignores the venv. Add the full path of the venv's Python (usually /your/path/here/Scripts/python on Windows or /your/path/here/bin/python on Linux/macOS). You may also use sys.executable to reference your Python.
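As a rough illustration (the script name and venv path below are placeholders, not taken from your setup):

```python
import subprocess
import sys

# Option 1: reuse the interpreter that is currently running (it already belongs to the active venv)
subprocess.run([sys.executable, "my_script.py"], check=True)

# Option 2: point at the venv's interpreter explicitly (adjust the path to your venv)
venv_python = "/your/path/here/bin/python"  # on Windows: r"C:\your\path\here\Scripts\python.exe"
subprocess.run([venv_python, "my_script.py"], check=True)
```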
You need to enable long file paths:
https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry
If you are using Windows, install the Microsoft Visual C++ Redistributable package from here: https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170. This should help you solve the issue.
There is actually an easy way to do this without using any external tools:
:'<,'>g/^/norm yy'>p'<dd
Do you need an explanation of how static classes work, or do you know how they work now?
The way you wrote it, only one user's credentials can be saved at a time. If you have multiple users, it would be better to have some data structure as storage for your users and pass a reference to this structure in the constructor of the Form, or you could have this structure as a static attribute in your current Credentials class (this would be easier, so you don't have to change a lot of your existing code).
For example, you could use "public static List<(string, string)> list = new List<(string, string)>();"
To add items you just use "list.Add(("userEmail","userPassword"));". To get data you would have to iterate using a loop of your choice. If you choose anything that isn't foreach, you have to access the data using an indexer (like an array) => list[indexOfUser].Item1 / list[0].Item2 - with Item1 and Item2 you access each individual string, so for you Item1 is userEmail and Item2 is userPassword.
And for validation, you can check if the inserted email is in your list by looping and checking Item1. If you find a match, check the password.
You need to use the ModelMetadataType attribute instead of MetadataType, and the partial classes should be in the same namespace.
Answer here - https://stackoverflow.com/a/37375987/955688
URLSearchParams is an iterable. If you log it with your example you should see something like URLSearchParams {size: 2}
in the console.
To get a plain object you can use Object.fromEntries(params), which returns an object with the values (or an empty object if there are no params).
You need to use ModelMetadataType and not MetadataType attribute. Answered here: https://stackoverflow.com/a/37375987/955688
Please consider uploading small demo project on our forum so we can run it and reproduce the problem directly.
Disclosure: I work as Aspose.CAD developer at Aspose.
Using the binding.irb tool mentioned by @mechnicov, I was able to determine that instead of routing to the new method, I was getting an unauthorized viewer message. Turns out there was some legacy authorization code that I needed to account for in my tests.
Specifically, I added the pry gem to my Gemfile, updated the test to this:
describe 'GET #new' do
it 'simplified test' do
get :new
binding.pry
expect(assigns(:map_id)).to eq(1)
end
end
Then I ran the test, and examined the response object (which is what this controller is for), then investigated its contents.
Running privileged containers in Kubernetes introduces serious security concerns. Privileged containers can access the host system almost without restriction, which violates container isolation principles and opens the door to cluster takeovers.
---
### Why It's Dangerous
Setting `privileged: true` gives a container:
- All Linux kernel capabilities
- Access to the host's devices
- The ability to modify the host filesystem
- Potential to escape the container and take over the host
These risks are explained in more depth in this article:
[Privileged Container Escape – Attack Vector](https://k8s-security.geek-kb.com/docs/attack_vectors/privileged_container_escape)
---
### How to Mitigate
1. Block Privileged Containers with Admission Controllers
Use policy engines like:
- [Kyverno](https://k8s-security.geek-kb.com/docs/best_practices/cluster_setup_and_hardening/api_server_security/kyverno)
- [OPA Gatekeeper](https://k8s-security.geek-kb.com/docs/best_practices/cluster_setup_and_hardening/api_server_security/opa_gatekeeper)
You can write policies that deny any workload with `privileged: true`.
---
2. Apply Pod Security Standards (PSS)
Kubernetes 1.25+ comes with a built-in [Pod Security Admission (PSA)](https://k8s-security.geek-kb.com/docs/best_practices/cluster_setup_and_hardening/pod_security/pod_security_standards) controller.
Use the `restricted` profile to prevent privileged containers and many other unsafe configurations at the namespace level.
---
3. Audit Your Cluster
Use tools to scan for security issues, including privilege escalations:
- [kubeaudit](https://k8s-security.geek-kb.com/docs/best_practices/cluster_setup_and_hardening/pod_security/kubeaudit)
- [kubescape](https://k8s-security.geek-kb.com/docs/best_practices/cluster_setup_and_hardening/configuration_validation/kubescape)
- [Polaris](https://k8s-security.geek-kb.com/docs/best_practices/cluster_setup_and_hardening/configuration_validation/polaris)
---
### Summary
Avoid using privileged containers unless absolutely necessary. If you must, isolate them in separate namespaces with tight controls. For most workloads, it’s better to enable specific capabilities rather than granting full privileges.
For more Kubernetes security content:
[K8s Security Knowledge Base](https://k8s-security.geek-kb.com/)
Your question: What does it do and how does the code sample work?
Code translation:
!ErrorHasOccured() || HandleError();
that code is equivalent to:
if (ErrorHasOccured()) HandleError();
How does it work? In C trigraphs, ??! is equal to |. So ??!??! means ||.
P.S.: This type of code was used when some keyboards did not have a | key.
result = sum(int(num) for num in numbers)
Try to take a look at:
https://grails.github.io/grails2-doc/2.3.2/guide/conf.html#configurationsAndDependencies
Especially the sub section "Disabling transitive dependency resolution".
I have been using newer versions of Grails that use Gradle, so I can't exactly remember the Gant way...
Use additional parameters extras
to work around this.
For example:
extras: "--inventory 'environments/dev/inventory-dev' --inventory 'environments/int/inventory-int'"
https://github.com/jenkinsci/ansible-plugin/issues/239#issuecomment-2427062898
While it isn't a warning (and it isn't a super uncommon practice for people to use local variables to shadow class fields), this should get picked up by Code Analyzer rule CA1500.
This article explains how to enable code analyzers.
You can create your own custom identity verification workflows, which can have their own configuration, but any of the ones provided by Docusign will be identical across accounts. Not all workflows may be available on all accounts, however.
Very odd, but I did the same exact thing you suggested and it fixed everything for me too!
I have an LG TV, but where can I find apps to install on it? The apps that appear in this list aren't enough.
I get this error when trying to preview reports/layouts:
Unable to cast COM object of type 'System.__ComObject' to interface type 'Sage.Reporting.Engine.Integration.IBackupNotificationService'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{61552EBA-29AA-4A8B-8E77-0E8375943D7A}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
What to do?
Solution: https://github.com/twbs/bootstrap/issues/33636#issuecomment-2088899114
.table-responsive {
overflow: auto;
.dropdown {
@extend .position-static;
}
}
It turned out that newer versions of Windows (10 and 11) apparently do not allow showing mapped drives unless specifically set.
To address this, navigate in registry to:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
Create a new DWORD value called EnableLinkedConnections and set its value to 1.
Restart the computer and then the mapped network drives will show in the dialog box with correct drive letters.
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
Yes 'ridk enable ucrt64' not being run before EVERY 'tebako press' was my problem.
Closing this and reopening with a new set of problems
Thanks
Did you ever get an answer to this? I've been having the same problem. If I comment out the following it does not crash.
.backgroundTask(.appRefresh("backgroundTask")){
//run task in background
}
Follow the link in your error to see a list of potential issues. Did you verify that a connection to JFrog Artifactory can be established from your build server and that the dependency is hosted by your Artifactory?
BTW, the mentioned artifact org.apache.activemq:activemq-pool:jar:5.7.0 was released on Oct 02, 2012. Are you sure you want to build on such an outdated library instead of using a newer version (like 6.1.6)?
It's finally fixed, but I have another issue - maybe you will have it too. This is the new error:
Failed to find Platform SDK with path: platforms;android-35
FAILURE: Build failed with an exception.
* What went wrong:
Could not determine the dependencies of task ':app:compileDebugJavaWithJavac'.
\> Failed to find Platform SDK with path: platforms;android-35
You can also deserialize to System.Text.Json.Nodes.JsonObject which supports indexers, allowing you to access any property:
var json = """{ "token": "ab123" }""";
var jsonObj = JsonSerializer.Deserialize<JsonObject>(json);
var token = jsonObj?["token"]?.GetValue<string>();
I found the answer to my question. While Google's API allows for batching a lot of different requests, the batch endpoint does not allow export requests. I found the better way is to make multiple requests at once to reduce my runtime, although this does end up using more bandwidth.
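For illustration, a minimal sketch of firing several export requests concurrently with a thread pool; the file IDs, token, and MIME type are placeholders, and you'd substitute whichever Drive export call you're making:

```python
import concurrent.futures
import requests

FILE_IDS = ["file_id_1", "file_id_2", "file_id_3"]       # placeholders
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}   # placeholder

def export_file(file_id):
    # One export request per file; the thread pool runs these in parallel
    url = f"https://www.googleapis.com/drive/v3/files/{file_id}/export"
    resp = requests.get(url, headers=HEADERS, params={"mimeType": "application/pdf"})
    resp.raise_for_status()
    return file_id, resp.content

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    for file_id, content in pool.map(export_file, FILE_IDS):
        with open(f"{file_id}.pdf", "wb") as fh:
            fh.write(content)
```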
So what could be done is to control each slot's Application Insights settings with environment variables (https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-config for Java apps).
What I did was add the environment variable APPLICATIONINSIGHTS_METRIC_INTERVAL_SECONDS=<something_really_big_here> to dramatically decrease the frequency of reported metrics, and hence the size of the ingested data. A little bit hacky, but it does the trick.
I'm also running into a problem. I'm still a giant noob at this, having just started, but I wanted to use this command. What spots do I fill out, or what do I replace? Where do I put the uninstall string, what words do I delete, and how can I get the software name if that's required?
$software = Read-Host "Software you want to remove"
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall', 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall'
Get-ChildItem $paths |
Where-Object{ $_.GetValue('DisplayName') -match "$software" } |
ForEach-Object{
$uninstallString = $_.GetValue('UninstallString') + ' /quiet /norestart'
Write-Host $uninstallString
& "C:\Windows\SYSTEM32\cmd.exe" /c $uninstallString
}
Having tested around, this seems to be an issue when using an external keyboard with an Android Studio emulator.
I've managed to reproduce the infinite loop when typing using an external keyboard in both my production app and a brand new app, using the code in the question.
Using the emulator keyboard and using a real device keyboard doesn't cause the infinite loop issue when using a TextFieldValue.
I can only assume this is a bug with the Emulator.
According to this article:
https://learn.microsoft.com/en-us/cpp/windows/determining-which-dlls-to-redistribute?view=msvc-170
It says:
Visual Studio 2022, 2019, 2017 and 2015 all have compatible toolset version numbers. For these versions, any newer Visual Studio Redistributable files may be used by apps built by a toolset from an older version. For example, Visual Studio 2022 Redistributable files may be used by apps built by using the Visual Studio 2017 or 2015 toolset. While they may be compatible, we don't support using older Redistributable files in apps built by using a newer toolset. For example, using the 2017 Redistributable files in apps built by using the 2019 toolset isn't supported.
So technically, all apps relying on the 2015 redistributable package should work with newer versions. The only thing that makes me skeptical is the use of the phrase "may be" in their documentation. Therefore, please be cautious with that, since this is a big change :)
The "true" is not correct!
Try "false"! But: the registration will always be successful - but you will not trigger a call nor fetching an incoming call - thats my findouts from today (March 24, 2025).
BTW: I am trying to register to my Fritz.Box and to fetch then a incoming call. no luck at all --- currently.
Anyone else has a working test app?
var sp = new SIPAccount(true, registerName, registerName, registerName, registerName, domainHost, 5060);
The call to the /api/2.2/jobs/run-now REST API only triggers the job. You'll need to call different APIs to get the output. The call to jobs/run-now should return a run ID.
If that's successful, then the next steps are to:
- check the status of the job to make sure it's completed running using this API: /api/2.2/jobs/runs/get. You may have to loop until the job is done or failed.
- once the job is done, you can get the output for that run using this API: /api/2.2/jobs/runs/get-output
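Roughly, the flow looks like this in Python (the workspace URL, token, job ID, and polling interval are placeholders; for multi-task jobs, runs/get-output expects the task run ID rather than the job run ID, so check the response of runs/get):

```python
import time
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder
HEADERS = {"Authorization": "Bearer <your-token>"}        # placeholder

# 1. Trigger the job; the response includes the run ID
run_id = requests.post(f"{HOST}/api/2.2/jobs/run-now",
                       headers=HEADERS, json={"job_id": 123}).json()["run_id"]

# 2. Poll runs/get until the run reaches a terminal state
while True:
    status = requests.get(f"{HOST}/api/2.2/jobs/runs/get",
                          headers=HEADERS, params={"run_id": run_id}).json()
    life_cycle = status.get("state", {}).get("life_cycle_state")
    if life_cycle in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        break
    time.sleep(15)

# 3. Once the run is finished, fetch its output
output = requests.get(f"{HOST}/api/2.2/jobs/runs/get-output",
                      headers=HEADERS, params={"run_id": run_id}).json()
print(output)
```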
One line in your fitness function looks suspicious.
private long fitness(Genotype<BitGene> genotype) {
var bitChromosome = genotype.chromosome().as(BitChromosome.class);
// Is this a field in your class? Should be a local variable.
variables = bitChromosome.stream()
.mapToInt(gene -> gene.bit() ? 1 : 0)
.toArray();
var objects = dataModel.getObjects();
...
}
If variables is defined outside the fitness function, it will be shared between several threads during the evaluation. This will lead to undefined behavior and might explain your results. The fitness function must be thread-safe and/or re-entrant.
private long fitness(Genotype<BitGene> genotype) {
var bitChromosome = genotype.chromosome().as(BitChromosome.class);
// Make it a local variable.
var variables = bitChromosome.stream()
.mapToInt(gene -> gene.bit() ? 1 : 0)
.toArray();
var objects = dataModel.getObjects();
...
}
Regards, Franz
I tried to make it work by modifying my build files, but nothing worked. I finally had to remove the library and clean the project so that my build passes.
Reviving this thread: I've been testing the same in my iOS app, which uses the Mapbox SDK. Has anyone had success bringing their own DEM sources into Mapbox?
After adding a custom DEM with elevations encoded in RGB according to the Mapbox formula (height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)), I do see elevation changes on my map when 3D is toggled on, but they are wildly exaggerated spikes (mountains are something like hundreds of km tall). As a hack, I tried reducing terrain exaggeration by 100x, but that nearly flattened them. Intermediate values in between resulted in non-useful spikes, so I think I'm barking up the wrong tree.
Any ideas?
if !mapView.mapboxMap.sourceExists(withId: "my-custom-dem") {
var source = RasterDemSource(id: "my-custom-dem")
source.url = "mapbox://username.my-custom-dem"
source.encoding = .mapbox // Tells Mapbox how to decode the RGB values
source.maxzoom = 16.0
source.minzoom = 0.0 // Allow full zoom range from 0-16
A good option is to rebuild the Kaniko image:
FROM alpine:latest AS build
RUN apk add --no-cache git
FROM gcr.io/kaniko-project/executor:v1.23.2-debug
COPY --from=build /usr/bin/git /usr/bin/git
This answer is different from the others posted here. Starting with Angular 17, the RouterTestingModule is deprecated. For my use case, after upgrading to Angular 17, VS Code began flagging this deprecation:
RouterTestingModule must be replaced by provideRouter
, but note this is a provider now, not an import. So the following error was thrown by the Jasmine compiler when provideRouter was placed under imports:
The solution was to simply move the provider to where it should have gone (under providers) - and the error was no longer thrown:
This blog entry shows a full example using provideRouter.
You can try using the eval() function; it evaluates a string and executes it as Python code if it is valid.
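A quick example (note that eval executes arbitrary code, so only use it on input you trust; ast.literal_eval is a safer choice for plain literals):

```python
import ast

expression = "2 * (3 + 4)"
print(eval(expression))               # 14 - the string is evaluated as Python code

# Safer alternative when the string is just a literal (number, list, dict, ...)
print(ast.literal_eval("[1, 2, 3]"))  # [1, 2, 3]
```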
It seems the default is 5 minutes:
internal sealed partial class DefaultHybridCache : HybridCache
{
internal const int DefaultExpirationMinutes = 5;
I am getting the same error in my release pipeline while trying to run dataform --run. I am using a service account key to authenticate the Dataform deployment to Google Cloud from an ADO release pipeline. The code works fine from my local machine; I only have this issue when running it from the ADO release pipeline. I have installed Node.js and npm, and the compile runs fine, listing all the actions the .sqlx files are going to perform, but when I try dataform --run it throws the error below:
Dataform encountered an error: Unexpected property "type", or property value type of "string" is incorrect.[0m
2025-03-22T13:55:25.4594218Z [91mReferenceError: Unexpected property "type", or property value type of "string" is incorrect.
2025-03-22T13:55:25.4595203Z at /azp/_work/r21/a/_Dataform Build/Build_Dataform_Artifacts/.npm/node_modules/@dataform/cli/bundle.js:137:23
2025-03-22T13:55:25.4595912Z at Array.forEach (<anonymous>)
2025-03-22T13:55:25.4596630Z at checkFields (/azp/_work/r21/a/_Dataform Build/Build_Dataform_Artifacts/.npm/node_modules/@dataform/cli/bundle.js:118:33)
2025-03-22T13:55:25.4598307Z at verifyObjectMatchesProto (/azp/_work/r21/a/_Dataform
Understanding the error given by TypeScript is crucial. It is trying to say that where your code expects something of type A, you are giving it something of type B.
One of the easiest ways to solve this issue is by matching the type of the incoming object with the type of your defined/expected object.
There are several ways to achieve this. One way is to write a helper function to convert your output into the desired type.
//helper function to convert from string type to GraphQLCode type
function toGraphQLCode(value: string): GraphQLCode {
const enumValues = Object.values(GraphQLCode).filter((v) => typeof v === "string");
if (enumValues.includes(value)) {
return value as GraphQLCode;
}
throw new Error(`Invalid GraphQLCode value: ${value}`);
}
interface MyArrayItem {
code: GraphQLCode;
// ...other fields
}
const myArray: MyArrayItem[] = [];
// codeMapper that returns strings that matches enum values
const codeMapper = {
someKey: "SOME_VALUES",
// Add other mappings....
} as const;
// Example usage in your resolver or logic
const code = "someKey"; // Your logic/input
myArray.push({
code: toGraphQLCode(codeMapper[code as keyof typeof codeMapper]),
// ...other fields
});
// Example resolver (if this is part of a GraphQL resolver)
const resolvers = {
Query: {
myResolver: () => {
const myArray: MyArrayItem[] = [];
const code = "someKey";
myArray.push({
code: toGraphQLCode(codeMapper[code as keyof typeof codeMapper]),
// ...other fields
});
return myArray;
},
},
};
I guess this sample code should help you out. Please let me know if the error still exists - we can debug further!
My question is it is another version of powershell or module or tool or what?
According to official documentation:
A .NET tool is a special NuGet package that contains a console application.
This applies to PowerShell. dotnet tool
, like python -m pip
and myriad alternative examples across programming languages' official implementations, is the official package manager for DotNet (.NET).
Consequently, the aforementioned command installs PowerShell in a manner that your OS's package manager doesn't understand, but which is standard amongst .NET packages.
And another question is that without this Powershell(dotnet global) – Will I not be able to install any module?
It supports modules installed via Install-Module
.
cursor.commit()
This command was needed at the end, as per the commenter.
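For context, a minimal sketch of where that call goes, assuming a pyodbc-style connection (the connection string and SQL are placeholders); pyodbc exposes commit() on the cursor as well as the connection:

```python
import pyodbc

conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=secret")   # placeholder connection string
cursor = conn.cursor()
cursor.execute("INSERT INTO my_table (col) VALUES (?)", "value")
cursor.commit()   # without this, the insert is rolled back when the connection closes
```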
Tried all the suggested methods but none worked for me. What finally did was going to Xcode -> Settings -> Accounts and signing in again with my dev account. After that previews started working again.
For me, if I imported IonDatetime, the ionChange event would not fire. If I omitted the import, the event fired.
Depends on what you are trying to achieve. A good example of setting min/height values would be for situations with lesser content. Sticky footers are what come to mind... https://css-tricks.com/couple-takes-sticky-footer/
Create a snippet of your menu issue and you may get a better answer.
QNX is in the process of moving all of our porting activity to GitHub. Try out this README for Boost specifically:
https://github.com/qnx-ports/build-files/tree/main/ports/boost
If you are still encountering this issue, just manually register App\Providers\FortifyServiceProvider::class inside bootstrap/providers and run php artisan optimize:clear, and that should solve the issue.
Add android:usesCleartextTraffic="true" to the <application> tag in android/app/src/main/AndroidManifest.xml. This worked for me.
Is there a reason it needs to be detected within the iframe? Because you could try detecting the input in the parent window and checking if the iframe is active.
Go to File > Preferences > Settings. You'll see Extensions > Emmet; scroll down and you'll find Preferences. Click Edit in settings.json and use the code below:
"emmet.preferences": {
"output.inlineBreak": 1,
},
These two headers are mandatory:
Content-Type: application/json
aeg-event-type: Notification
My ISP blocks traffic on specific ports (including port 80). By changing the port, my site is accessible from the outside!
The root cause of the problem was that I didn't change the job name between runs. Thanks to @JayashankarGS's answer for showing, correctly, an example with updated job names: job_name3
, job_name5
, etc. The YAML file and command()
statement in the original question work correctly when name
is changed to be unique or omitted entirely.
So I reported this also to the SAS Support and the result of the investigation/discussion is:
SAS R&D [...] confirm that what you have experienced is a bug
(SAS R&D = SAS Research and Development Division) as well as
SAS R&D have now confirmed that this issue should be fixed in the 2025.03 release of SAS Studio
So: Case (provisionally) closed. 🕵️♂️
It isn't the responsibility of the data layer to handle the user input.
If we are talking about user input, add uniqueness validation to the front-end form or to the backend handler.
It is better to use both options. The front end checks uniqueness among the pairs being edited, while the backend handler checks uniqueness among all existing pairs (which needs an additional query to the database).
When the validations pass, you can safely send the data to the data layer for saving.
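For instance, a minimal sketch of the backend check, assuming a Django-style ORM and a hypothetical Pair model (the model and field names are made up):

```python
from myapp.models import Pair  # hypothetical model with "left" and "right" fields

def save_pair(left, right):
    # Extra query against the database: reject the pair if it already exists
    if Pair.objects.filter(left=left, right=right).exists():
        raise ValueError("This pair already exists")
    # Validation passed, so it is safe to hand the data to the data layer
    return Pair.objects.create(left=left, right=right)
```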
There are some fine and informative solutions already, but I thought I'd share mine. This solution is a method which can be added to any model class to give single instances a rough equivalent of the queryset .update(...) method, with the same argument syntax. It makes use of the update_fields keyword argument to the model's save method, which enables more efficient behind-the-scenes database updating.
Essentially, it is a wrapper around calling instance.save(...) (whether you have overridden it in your model or not) that behaves the same, argument-wise, as queryset.update(...), is more efficient than calling the "full" save method (for most purposes anyway), and also calls the pre- and post-save signals (but provides an argument for ignoring these like a true queryset update, as well). It also allows passing in an arbitrary number of dictionaries as positional args, which will be automatically converted to keyword args for you.
from django.db import models
from django.db.models import ForeignKey, ManyToManyField, OneToOneField, JSONField
from django.contrib.postgres.fields import ArrayField
from django.db.models.manager import Manager
class YourModel(models.Model):
...
def log(self, *args, exception=None, **kwargs):
""" Optionally, usethis method to define how to use your logging setup to log messages related to this
instance specifically. For this SO opost I wuill simply assume 'you've got a logger defined globally,
and this method calls it. But some creative logging could produce highly useful organizational/
informational enhancements. """
(logger.exception if isinstance(exception, Exception) else logger.log)(*args, instance=self, **kwargs)
# Presuming that you've setup your logging to accept
# an instance object, which modifies where it is logged
# to, or something. Feel free to modify this method however
# you see fit.
def update(self, *update_by_dicts, skip_signals=False, use_query=False, refresh_instance=True,
validate_kwargs=False, allow_updating_relations=True, **kwargs):
"""
update instance method
This method enables the calling of instance.update(...) in approximately the same manner
as the update method of a queryset, allowing seamless behavior for updating either a
query or a single instance that's been loaded into memory. It provides options via the
keyword args as described below.
NOTE that providing positional (non-keyword) arguments can be done; if it is, they must each be
a dictionary, which will be unpacked into keyword arguments as if each key/value pair
had been passed in as a keyword argument to this method.
Args:
skip_signals: If True, then both the pre- and post-save signals will be ignored after .save is
called at the end of this method's execution (the behavior of a queryset's update method).
You can also pass this in as 'pre' or 'post'; if you do, then the pre_save/ or post_save
signal, respectively, will be skipped, while the other will execute. The default value for
this argument is False, meaning that both pre and post_save are called, like a normal save
method call.
use_query: Normally, this method obviates the need to query "self" (which has already been loaded
from the database into an instance variable, after all) by utilizing the save method, but if
for some reason you would prefer not to have this behavior, passing in use_query=True will
cause the method to use a different approach, and will self-query using the ORM, and then call
the typical update method on the resulting one-element queryset. In this case, signals will be
skipped regardless of the value specified for the skip_signals argument. However, any positional
argument dicts provided will still be unpacked and passed in as keyword args.
refresh_instance: Only does anything if use_query is True; then, if refresh_instance is True, it will
call refresh_from_db after the ORM update(s), to make sure this instance isn't outdated. If you're
not going to use the instance anymore afterwards, specifying refresh_instance=False saves some time
since it won't re-query it from the database.
validate_kwargs: Normally, all keyword arguments (and keys from positional argument dicts, if any)
will be blindly passed to self.save. However, if there is any chance that the keys supplied may contain
values that do not correspond to existing fields in the model (such as in some system using
polymorphism or other forms of inheritance, where each child model may have some fields unique only to
that model), you can specify validate_kwargs=True to check all of the fields against their presence in the
instance (using hasattr and discarding if it returns False); specifying a list of argument names instead
will only check those arguments. This adds a nominal amount of overhead time cost to the execution,
so it should only be used if it is needed, but it solves a couple of issues related to model
inheritance and/or polymorphism, and protects against dynamic instance updating situations going wrong, too.
allow_updating_relations: If True (which is the default), it enables passing in fields of models accessed through
relations (like ForeignKeys or ManyToManyFields, or their reverse relationships), via standard Django query
syntax using double underscores. These updates are done through a normal ORM queryset update, for
efficiency. If False, any argument whose name contains double underscores will not be valid, unless
it is used to reference a key in a JSONField, or an index in an ArrayField.
* PLEASE NOTE that manytomany and reverse foreignkey fields WILL NOT WORK without being validated by providing
validate_kwargs=True or including the related_name relation set manager name in the list provided to validate
kwargs.
kwargs: All other keyword arguments will be interpreted as field names to update, and the values to update them
to. Please note also that any positional argument dicts will be unpacked and literally merged in with
the kwargs dict, with priority in the case of duplicate keys being given to those given in kwargs.
Returns:
A dict of the fields that were not successfully updated with the error message associated with why it failed.
All expected potential exceptions are caught and logged and gracefully handled/logged, to avoid interruption
of your app/program, so the return value can be used to tell you if there were any issues.
"""
if skip_signals is True:
skip_signals = set(['pre_save', 'post_save'])
elif isinstance(skip_signals, str):
skip_signals = set([skip_signals])
elif isinstance(skip_signals, (list, tuple)):
skip_signals = set(skip_signals)
failed_keys = set()
separate_queries = []
self_query_updates = dict()
# Merging all keys and their values from any positional dictionaries provided, directly into kwargs for ease of
# processing later.
for more_args in update_by_dicts:
for key, val in more_args.items():
if key not in kwargs: # If key was passed explicitly as a kwarg, then we prioritize that and ignore it here
kwargs[key] = val
# If argument validation is desired, we'll do that here, and log any removed after removing those entries from kwargs
if validate_kwargs is True:
# If it is the literal value True, we'll convert it to a list containing the names of every field we requested to update
validate_kwargs = [ key for key in kwargs ]
if validate_kwargs:
# Unless it is False/None, we have at least one field to validate, and will do so now using hasattr, and delete any
# key/value pairs where the key is not a valid name of an attribute in this instance (or a relation, if applicable).
for field_name in validate_kwargs:
if not (result := check_field_name_validity(self, field_name, kwargs[field_name], allow_relations=allow_updating_relations)):
failed_keys.add(field_name)
del kwargs[field_name]
else:
if isinstance(result, dict):
# This is a related query, and the function has returned information about that query
separate_queries.append(result)
del kwargs[field_name] # Deleting from kwargs, since it will be called due to its reference in separate_queries
#elif (result is True) and use_query:
# # Converting any True result to dicts representing ORM queries to use, if use_query argument is True
# separate_queries.append({
# 'type': 'self',
# 'manager': type(self).objects.filter(pk=self.pk),
# 'update_statement': field_name,
# })
# del kwargs[field_name]
if len(kwargs) > 0:
upd_fields = set()
if skip_signals:  # normalized to a set above; truthy means at least one signal should be skipped
self.__skip_signals = skip_signals
else:
try:
del self.__skip_signals
except:
pass
for field_name, value in kwargs.items():
# Looping through any keys remaining in kwargs in order to modify this instance's fields accordingly, and then call
# self.save(update_fields=[...]) to perform a database UPDATE only on the changed fields for efficiency.
# If one or both save signals are to be skipped, we'll add attributes to the instance; I will leave it to the reader to
# modify the signal receiver(s) to check for the presence of said attributes, and return without doing anything if they're
# present and True.
# If any exceptions are raised, we'll catch, log, and inform the caller in the returned list of failed field names
try:
if use_query:
self_query_updates[field_name] = value # Advantage to using a self query is we don't pre-process, just execute updates as-is
else:
if '__' in field_name:
# Traversing the path of indexes/attribute names for the field, since it has double underscores. The code below will handle JSONFields, ArrayFields,
# and relations for ForeignKeys and OneToOneFields, in the case that those fields were not validated
path_toks = field_name.split('__')
real_obj = None
if hasattr(self, path_toks[0]) and isinstance(getattr(self, path_toks[0]), models.Model):
result = check_field_name_validity(self, field_name, value, allow_relations=allow_updating_relations)
if not result:
raise ValueError(f"Non-validated kwarg '{field_name}' appears to be a related instance, but there was a problem with the field")
else:
separate_queries.append(result) # If it validates, adding it to separate_queries to avoid code duplication
continue
for attr in path_toks:
    # Walk the path: the first hop is an attribute lookup on self, later hops index into dicts/lists
    if real_obj is None:
        real_obj = self
    elif isinstance(real_obj, models.Model):
        real_obj = getattr(real_obj, last_key)
    else:
        real_obj = real_obj[last_key]
    last_key = int(attr) if attr.isdigit() else attr
else:
    upd_fields.add(path_toks[0]) # Add just the root of the field's 'path', as that is the JSONField/etc that we'll update in save()
    real_obj[last_key] = value
else:
setattr(self, field_name, value)
upd_fields.add(field_name)
except Exception as e:
self.log("Exception encountered during processing of field '{field_name}'", exception=e)
failed_keys.add(field_name)
if use_query:
# Executing "self-query" by filtering model class for the single instance and using atomic update method of queryset
type(self).objects.filter(pk=self.pk).update(**self_query_updates)
# Then, refreshing the instance from DB so we don't have outdated field values (unless not needed, via arguments)
if refresh_instance:
self.refresh_from_db()
else:
# Finally, calling save with update_fields
self.save(update_fields=upd_fields)
try:
del self.__skip_signals
except:
pass
# Lastly, executing whatever separate queries may have been requested due to related model fields
for qrydef in separate_queries:
try:
qrydef['manager'].update(**qrydef['update_statement'])
except Exception as e:
self.log(f"Exception while processing separarte model query defined by {qrydef}", exception=e)
faled_keys.add(qrydef['key'])
return failed_keys
(Outside of the model class definition)
def check_field_name_validity(instance, field_name, value, allow_relations=True):
if hasattr(instance, field_name):
    # A plain field/attribute that exists directly on the instance is a valid update target
    return True
if '__' in field_name:
path_tokens = field_name.split('__')
arg_valid = False
try:
if hasattr(instance, path_tokens[0]):
# Usage of isinstance allows subclasses of these field types to be recognized
obj = getattr(instance, path_tokens[0])
if isinstance(instance._meta.get_field(path_tokens[0]), JSONField):
if len(path_tokens) > 1:
for nextkey in path_tokens[1:]:
obj = obj[nextkey]
# If we've made it to the end of the path of keys, then this argument is valid
return True
elif isinstance(instance._meta.get_field(path_tokens[0]), ArrayField):
value = obj
for index in [ (int(x) if x.isdigit() else x) for x in path_tokens[1:] ]:
# We converted each path "token" from the split on double underscore into
# an integer if it is a str representation of one, else left it as a str;
# this allows nested ArrayFields or ArrayFields made up of DictFields.
value = value[index]
# If we've made it to the end of the path of keys, then this argument is valid
return True
elif isinstance(instance._meta.get_field(path_tokens[0]), Manager):
# The fact that its class is Manager means it is a reverse relation, so we'll
# need to check the validity of the rest of it by seeing if a query on it results
# in an exception. We'll use __isnull since it should work for any valid field
if not allow_relations:
raise TypeError(f"Field '{path_tokens[0]}' of field name '{field_name}' is a related set manager, but allow_relations is False")
search_path = "__".join(path_tokens[1:])
try:
if not obj.filter(**{search_path + '__isnull': False}).exists():
raise ValueError(f"Relation field argument '{field_name}' does not exist or the query results in no matches")
except Exception as e:
raise e
else:
return {
'key': field_name,
'type': 'collection',
'manager': obj,
'update_statement': {"__".join(path_tokens[1:]): value},
}
else:
# We'll assume it is a OneToOne/ForeignKey field and if it's not it will error, which tells us it's invalid.
# 'obj' should contain the followed reference to the related instance already, if so.
# We'll recursively call this function on the reated object and return the result. Recursion is a beautiful thing!
# If you don't know, now you know.
if not allow_relations:
raise TypeError(f"Field '{path_tokens[0]}' of field name '{field_name}' is a related model instance, but allow_relations is False")
arg_valid = check_field_name_validity(obj, "__".join(path_tokens[1:]), value, allow_relations=True)
if arg_valid:
return {
'key': field_name,
'type': 'instance',
'manager': type(obj).objects.filter(pk=obj.pk),
'update_statement': {"__".join(path_tokens[1:]): value},
}
else:
raise ValueError(f"Field '{field_name}' cannot be found in the instance nor any related managers or instances")
except Exception as e:
# If any exception is caught at all, it means this is not a valid argument, and we'll log the exception and remove it from kwargs
instance.log(f"Field '{field_name}' is invalid. Reason = {e}", exception=e)
return False
*NOTE: this is similar to what I use in the web platform I've been developing for the past couple of years, but I made changes that I haven't tested, and I am prone to typos. If you use this and find issues, please let me know and I'll fix them in this post.
As pointed out in the comments, the issue stems from a breaking change introduced by setuptools>=78
. A workaround is to use the PIP_CONSTRAINT
environment variable to tell pip to use a lower version of setuptools
. For instance, in a file named pip-constraint.txt
:
setuptools<78
and then:
PIP_CONSTRAINT=pip-constraint.txt pip install stringcase
This works for any Python version.
Turns out this was a permissions issue against the artifact repository that I use. Temporarily resetting my npmrc and hitting the normal npm repository fixed this.
Apologies, I wasn't clear so I created a debate around data validation, as I say my hands are tied to the branch system we are supplied, and being NHS, the options are limited.
Thanks to the suggestion from @ThomA, I used a windowed COUNT function to get what I was looking for. As mentioned by Alan, the second part was a simple join with a null check, so here is the final query to obtain the list:
WITH cte AS (
SELECT PatientId, CardId, Surname, Forenames, DateOfBirth, PostCode, Branch,
COUNT(CardId) OVER(PARTITION BY Surname, ForeNames, DateOfBirth, PostCode) as [records]
FROM pmr.Patient
WHERE Branch = 9
)
SELECT cte.* FROM cte
LEFT JOIN pmr.[Session] s ON cte.PatientId = s.Patient AND cte.Branch = s.Branch
WHERE records > 1 AND s.Patient IS NULL
ORDER BY Surname, Forenames
Gemma 3 requires transformers version 4.45.0.dev. Please install this specific version using the provided command - !pip install git+https://github.com/huggingface/[email protected] - and try again. I tried replicating the error and was able to get the tokenizers successfully. See the attached gist for more details. Thank you.
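Once that build is installed, loading the tokenizer is the usual transformers call; the checkpoint ID below is just an example (Gemma models are gated, so you need to have accepted the license on Hugging Face first):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")  # example checkpoint ID
print(tokenizer("Hello, Gemma!")["input_ids"])
```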
I find it rather funny (not) that there is such an incompatibility / unfriendly behaviour of git am when working with CRLF files:
foo.cpp: C source, ASCII text, with CRLF line terminators
I'd have expected to be able to do a plain format-patch, immediately followed by a corresponding plain git am, without any deviation shenanigans occurring. That would have been proper usability.
However, doing so fails both in another repository and in the original repository itself (when sitting at the correct pre-commit revision) - with both repos having identical .git/config settings.
patch -p[X] < foo.patch
however does work (but of course one will be missing out on the full cooked toolchain-provided commit handling then).
git am --keep-cr (thanks!) appears to work, but with
warning: quoted CRLF detected
message.
git-svn repository here as well, so maybe that's the complication.
composer require akshay/laravel-url-maintenance
The akshay/laravel-url-maintenance package allows you to easily put a specific URL or route of your Laravel application under maintenance or bring it back up. It provides Artisan commands to manage site maintenance on a per-route basis.
Know that it will fail. I mean, how often are you making changes anyway??? When it fails, replay the build and hit it again. Hit it twice to make it nice.
Some instances cannot be resolved with a depends_on like moving a subnet from one NAT gateway to another. No way around it, you have a 10-second outage and you have to apply twice. -target
on the one you're removing it from and subsequent -target
on the one that is receiving will reduce the outage...
Use a newer version of terraform:
https://developer.hashicorp.com/terraform/language/meta-arguments/depends_on
Note: Module support for depends_on
was added in Terraform version 0.13, and prior versions can only use it with resources.
You can change the mustache template that handles the model naming. It's not easy, but this would be a good alternative as opposed to getting rid of it.
I have never seen names like this. Maybe you have some other mistake that confuses the generator into adding the number suffix?
ChatGPT suggested brew install jimtcl
Not sure you're still interested in an answer, but here are some thoughts.
'rules.value.list' seems to accept single values, or modalities, only, so your lines would not work. As I understand it, these rules are based on links between variables - conditional syntheses, if you wish. They do not allow you to restrict the range of values taken by variables, or to give conditions and restrictions to single variables. I myself wish they would...
Maybe one of the reasons is that, by allowing this, values between the original and the synthesized datasets could differ so much that utility would be dramatically reduced (for instance if values like 900 or 1000 have to be synthesized to 700 maximum...). What you could do is modify your variables in the original dataset so that all values above 700 are given the value 700.
As for your other issue, why not skip synthesizing the variable 'net' at all, then sum the two 'payed' and 'received' variables in your final synthesized dataset? Sounds to me like the easiest solution.
Hope that helps.
This package might be a great fit for your needs: https://github.com/TypiCMS/NestableCollection.
Yep, same here - refreshing fine on Desktop, failing on the Service. Error message:
Data source error:The following system error occurred: Type mismatch. Table: POGoodsR.Cluster URI:WABI-NORTH-EUROPE-E-PRIMARY-redirect.analysis.windows.netActivity ID:b418b01a-90aa-48f3-8fbe-67db084ecd22Request ID:46f6bae7-6c08-4a65-9c2c-97ac94a059b6Time:2025-03-24 16:11:25Z
No solution yet 🤷♂️
You can use the mui ClickAwayListener API.
I'd love a modified Firefox fork with Node.js built in. But unfortunately no one has made one.
You can look at alternatives:
Maybe one of those will work better, but @sysrage is right, you should work on storing and retrieving your data in a better way.
Asciidoctor PDF currently only supports custom roles for styling paragraphs and phrases. If you need to add a role to a table and have that role affect the table style, you must create an extended converter that applies the style appropriately. Fortunately, there is an example in the documentation.
Instead of using basepath: "./company/the-tool", try setting basepath: "". TanStack Router is already handling relative paths based on the current URL. Since your app is under /company/the-tool, it should automatically resolve the correct routes
It is not currently implemented. See this feature request, https://issuetracker.google.com/375867285.
Not sure if you found an answer for this, but I've had ZQ620 printers intermittently disconnect from Bluetooth on all devices used. It turned out to be an issue with the Link-OS version the printer was using, which needed to be updated on the physical unit. Once the latest version was installed, it had no issues being recognized.
Basic support for DuckDB was introduced in DbVisualizer 25.1.
The exact property name in Oracle JDBC is "defaultRowPrefetch". Try with that name, and also try making it part of the URL, i.e. url?defaultRowPrefetch=1000, in case Spark is not parsing it correctly.
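If you're reading through PySpark, the two variants would look roughly like this (connection details are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Variant 1: pass defaultRowPrefetch as a JDBC option; Spark forwards unknown
# options to the driver as connection properties
df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")   # placeholder
      .option("dbtable", "SCHEMA.MY_TABLE")                        # placeholder
      .option("user", "scott").option("password", "tiger")         # placeholders
      .option("driver", "oracle.jdbc.OracleDriver")
      .option("defaultRowPrefetch", "1000")
      .load())

# Variant 2: append it to the URL in case the option isn't forwarded as expected
url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1?defaultRowPrefetch=1000"
```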
Unfortunately, Terraform still does not support this cleanly, because the Azure APIs aren't sufficient yet. In my case, I wanted to route traffic to several backend pools, one per function app, so I passed each function app's default domain as fqdns. Once Terraform finished running, I noticed the Application Gateway implicitly recognized the App Service plan type, adjusted the type in the portal, and worked correctly as I expected.
backend_address_pool {
name = "<BACKEND_ADDRESS_POOL_NAME>"
fqdns = ["<DEFAULT_DOMAIN_NAME_FUNCTION_APP>"]
}
If one tries to (is in the questionably lucky situation of having to.......) relocate some patch activity from one repo to another repo with sufficiently different directory hierarchy layout, then possibly the best way is to:
create an interim temp commit to adapt the target repository to the layout required by the git am series
apply the patch series
remove the interim temp commit (via interactive rebase), or alternatively specifically revert it (whichever way is more suitable to express proper history requirements)
(thereby staying right within efficient fluid toolchain behaviour, rather than having to fight with "special" error state/situations such as *.rej files)
To get the size of the text, there's only this bit missing:
size = font_metrics.size(0, "daN/cm²")
Which will return the size of the bounding box required to print the string to the box. See the documentation about QFontMetrics for the specifics.
Note that, as @musicamante mentioned in the comments, there will probably be more work to do to get the widget the right size and to make sure all of the text is properly displayed in all environments.
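Putting it together, a minimal sketch (assuming PyQt5; PySide2/6 are nearly identical):

```python
from PyQt5.QtGui import QFont, QFontMetrics
from PyQt5.QtWidgets import QApplication

app = QApplication([])             # a QApplication must exist before using font metrics
metrics = QFontMetrics(QFont("Arial", 10))
size = metrics.size(0, "daN/cm²")  # bounding box needed to render the string
print(size.width(), size.height())
```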
An Infinite value is the same as an empty value and results in the same value as the parent.
P.S. A child cannot set limits higher than its parent (I struggled to understand this).
If you ever feel unsure, you can do this:
int3
push 0
int3
int3 is a breakpoint trap, and you can use info registers in gdb; this way you can check the difference between rsp before and after the push.
That's a OneLake operation, not a Semantic Model operation. So not XMLA endpoint.
There's a REST API for that: https://learn.microsoft.com/en-us/rest/api/fabric/core/onelake-shortcuts
That was indeed helpful. Below are a few ways to check for non-printable characters in a file:
cat -A file.txt
cat -vte file.txt
or
od -c file.txt