How many tickets are there? How can I check?
There is always one ticket (the service ticket) under ap-req > ticket. It's sent in the clear, but always paired with a one-time authenticator (aka checksum) that proves the client knows the session key.
When delegation is enabled, the second ticket (delegated) is stored within that authenticator, under ap-req > authenticator > cipher > authenticator > cksum > krb-cred.
How many tickets are in the request?
Impossible to tell from the screenshot.
If there are 2: please point me to them. And how do I accept them on the server side?
It should be automatically stored as part of the server's (acceptor's) GSSContext. That seems to be happening here and here.
If there is 1: how should I add one more ticket?
In HTTP, at least as far as I understand it, the client needs to perform delegation proactively (since only one GSSAPI step is possible, the server can't request it).
The client's klist needs to show a TGT that is forwardable.
Also, the user principal needs to not have any KDC-side restrictions. For example, Domain Admins on Windows might have the "This account is sensitive and cannot be delegated" flag set on them.
If the HTTP service ticket happens to be cached in klist, then it should show the ok_as_delegate flag, corresponding to "Trust this user for delegation[...]".
Windows and some other clients require that flag (treating it as admin-set policy); other clients ignore it and always delegate if configured, e.g. a Java client could use requestDelegPolicy().
The HTTP client needs to be configured to do delegation.
In Firefox, network.negotiate-auth.delegation-uris would be set to https:// for example or to .example.com (or a combination) to make the browser initiate delegation. (Make sure you don't make the 'delegation' list too broad; it should only allow a few specific hosts.)
With curl you would specify curl --negotiate --delegation always (doesn't work for me on Windows, but does work on Linux).
If you were making a custom HTTP client in Java, I think you would call .requestCredDeleg(true) on the GSSContext object before getting a token.
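For illustration, a minimal sketch of that in plain JGSS (the hostname and the SPNEGO OID choice here are assumptions, not taken from the question):
import org.ietf.jgss.*;
import java.util.Base64;

public class DelegatingClient {
    public static void main(String[] args) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        GSSName server = manager.createName("HTTP@www.example.com", GSSName.NT_HOSTBASED_SERVICE);
        Oid spnego = new Oid("1.3.6.1.5.5.2");
        GSSContext ctx = manager.createContext(server, spnego, null, GSSContext.DEFAULT_LIFETIME);
        ctx.requestCredDeleg(true);   // ask for delegation; must precede the first initSecContext()
        ctx.requestMutualAuth(true);
        byte[] token = ctx.initSecContext(new byte[0], 0, 0);
        // this goes into the HTTP header: Authorization: Negotiate <base64 token>
        System.out.println("Negotiate " + Base64.getEncoder().encodeToString(token));
    }
}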
I unnested ("unzipped") the array and then re-aggregated ("zipped") it:
SELECT
boxes.box_id,
ARRAY_AGG(contents.label)
FROM
boxes,
LATERAL FLATTEN(input => boxes.contents) AS item,
contents
WHERE
item.value = contents.content_id
GROUP BY boxes.box_id
ORDER BY boxes.box_id;
I accidentally deleted all the files.
Deleting the Derived Data solved it for me.
I found the real problem. The phone I use for debugging is Android 11. When I went to install the app, it complained about minimum SDK version. Without thinking, I changed that and didn't notice the newly appearing yellow warning marks on the permissions.
Moving to a phone with Android 12 and building for that fixes everything.
I do really need to target Android 11, so I'll have to set up the code to support both, but I can do that now that I understand.
PyTorch should be installed via pip, as conda is not supported. You can follow the instructions here: https://pytorch.org/get-started/locally/
For CUDA 11.8 the command is:
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
To be sure, you can first uninstall any other version:
python -m pip uninstall torch
python -m pip cache purge
You can do this easily with Raku/Sparrow:
begin:
regexp: ^^ \d\d "/" \d\d "/" \d\d\d\d
generator: <<RAKU
!raku
say '^^ \s+ "', config()<pkg>, '" \s+';
RAKU
end:
code: <<RAKU
!raku
for matched() -> $line {
say $line
}
RAKU
Then just: s6 --task-run@pkg=openssl.base
Please note it is only available in Snowsight. Since you are unable to view the code collapse feature, I am assuming you are logging in directly to the classic console.
You just need to navigate to Snowsight from the classic console, as mentioned in the documentation.
It should not be an access issue: even if your account is in the first stage of the Snowsight upgrade, you can still choose between the classic console and Snowsight.
I was able to do it like this in Visual Studio 2022:
@{ #region Some region }
<p>Some HTML</p>
<p>Some more HTML</p>
@{ #endregion }
Try invalidating the cache, and when Android Studio opens again, the "code", "split", and "design" buttons should appear. On Mac, you can invalidate the cache by going to: File → Invalidate Caches → Invalidate and Restart.
Got it! Changed \n to <br>:
"value": "@{items('For_each')?['job_description']} is over due <br>"
Starship supports enabling a right prompt. Works for me on macOS with the zsh shell. I tried add_newline = false but it doesn't work for me. I don't know if they have an option for the left prompt 😂.
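For reference, a minimal sketch of a right-prompt config in ~/.config/starship.toml (the time module here is just an example of something to put on the right):
right_format = "$time"

[time]
disabled = false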
You can go on Kaggle (a place where you can find datasets and machine-learning models), sign up, and go to the "Learn" section. There you can learn basic Pandas and data visualization. For NumPy, https://numpy.org/learn/ has a bunch of resources. Hope this helps!
Fixed it by adding the --wait argument to the command in the .gitconfig file (or the ./.git/config file for a local change). Like:
[diff]
tool = vscode
[difftool "vscode"]
cmd = code --wait --diff $LOCAL $REMOTE
Then running the following command:
git difftool --no-prompt --tool=vscode ProgrammingRust/projs/actix-gcd/src/main.rs
The issue was due to a difference between general TFLite implementations and TFLM implementations specifically. TFLite does not specify the dimensions of the output tensors before the model is invoked; instead it dynamically allocates the necessary space at invocation time. TFLM does not support dynamic allocation and instead relies on the predefined dimensions in the metadata of the .tflite model to allocate statically. I used netron.app to determine that this metadata was missing. I used the FlatBuffers compiler to convert the .tflite file to a .json file where I could see and manipulate the metadata:
.\flatc.exe -t --strict-json --defaults-json -o . schema.fbs -- model2.tflite
I added the missing dimensions to the output tensors and then recompiled from the json back into a .tflite file:
flatc -b --defaults-json -o new_model schema.fbs model2.json
Make sure all the file paths are correct; I put all of mine in the same folder.
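For illustration, a tensor entry in the generated JSON looks roughly like this (the name and dimensions here are hypothetical, not from my model):
"tensors": [
  {
    "shape": [1, 10],
    "type": "FLOAT32",
    "name": "StatefulPartitionedCall:0"
  }
]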
In Rails 6+, you can invalidate a specific fragment using:
Rails.cache.delete("the-key")
This happened to me in Angular 16, and the solution was to check the @ng-bootstrap/ng-bootstrap dependencies table and use exactly the ng-bootstrap, Bootstrap CSS, and Popper versions that match my Angular version.
I also faced a similar problem after updating Android Studio to Ladybug. My Flutter project was working, but after updating Android Studio I started getting this error. After browsing through many answers, the steps below solved the issue:
Open the android folder of the Flutter project in Android Studio and update Gradle and the Android Gradle Plugin to the latest version (you can update using the prompt you get when you open the project, or manually).
In the android/app/build.gradle file, make sure the correct JDK version is being used in the compileOptions and kotlinOptions blocks (see the sketch after these steps).
Make sure the correct Gradle and Android Gradle Plugin versions are used in the build.gradle file.
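For reference, a sketch of the android/app/build.gradle blocks from step 2, assuming JDK 17 (match the version to your own setup):
android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_17
        targetCompatibility JavaVersion.VERSION_17
    }
    kotlinOptions {
        jvmTarget = '17'
    }
}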
What does the --only-show-errors option do in such a case? Will it be helpful to track only errors? https://learn.microsoft.com/en-us/cli/azure/vm/run-command?view=azure-cli-latest#az-vm-run-command-invoke-optional-parameters
Have you given it a try?
It's the apostrophe in one of the labels that did it. I thought the `""' construction in `splitvallabels' could deal with it, but it can't. I'll have to change the labels, I guess. Also see here.
I know it's an old post, but here are the steps:
download glab
generate a token under the GitLab instance you have access to
GITLAB_HOST=https://your-gitlab-host ./glab auth login
GITLAB_HOST=https://your-gitlab-host ./glab repo clone --group group-you-have-access
I believe the URL you are requesting is already cached with the CORS response, and you need to invalidate it first.
Your CloudFront configuration shows that you are not caching "OPTIONS" methods, so the preflight calls will be accessing non-cached versions of the URLs, allowing the CORS test site to return a successful response, since it never executes the actual GET request. However, GET is cached by default, so if you tested this access before setting these header configurations on S3/CloudFront, you would still be getting the cached response.
These are JSON numbers in string format, per the GoogleSQL syntax notation rules: syntax wrapped in double quotes ("") is required.
When working with JSON data in GoogleSQL, using the JSON data type lets you load semi-structured JSON into BigQuery without providing a schema for the JSON data upfront. This lets you store and query data that doesn't always adhere to fixed schemas and data types. By ingesting JSON data as a JSON data type, BigQuery can encode and process each JSON field individually. You can then query the values of fields and array elements within the JSON data by using the field access operator, which makes JSON queries intuitive and cost-efficient.
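For illustration, a minimal sketch with hypothetical table and field names:
-- query a JSON column with the field access operator
SELECT
  json_payload.customer.name,
  json_payload.items[0].price
FROM my_dataset.my_table;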
If you go to the code window and click on the line numbers, you will see a down arrow; you can click it to collapse the section.
Try ticking this setting in Excel:
Sir, did you solve the problem?
You can achieve this by adding .devcontainers/ to a global .gitignore file.
See this answer for more information on how to achieve this.
With this setup, all my dev containers are ignored until they are explicitly tracked in the repo.
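For reference, a minimal sketch of the global-ignore setup (the file name here is just a convention):
git config --global core.excludesFile ~/.gitignore_global
echo ".devcontainers/" >> ~/.gitignore_global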
KazuCocoa pointed me to the documentation, which makes it clear:
https://appium.github.io/appium-xcuitest-driver/latest/reference/locator-strategies/
name, accessibility id : These locator types are synonyms and internally get transformed into search by element's name attribute for IOS
There are multiple ways you can achieve this:
The answer can be found at this link, github.com/jetty/jetty.project/issues/12938.
This is what was posted in the link by joakime
"If the work directory contents do not match the WAR it's re-extracted.
If you don't want this behavior, then don't use WAR deployment, use Directory deployment.
Just unpack the WAR into a directory in ${jetty.base}/webapps/<appname>/ and use that as the main deployment. (just don't put the WAR file in ${jetty.base}/webapps/)"
Though, I would have liked an option for altering the work directory in emergency scenarios.
Turns out the problem was simply that my Python script was not named main, and there was another Python app main.py in the same working directory.
For anyone else who may face similar issues in the future:
Please note that the name of your python script should match the uvicorn.run("<filename>:app", ...) part.
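For illustration, a minimal sketch assuming FastAPI and a file actually named main.py:
# main.py - the module name in "main:app" must match this file's name
import uvicorn
from fastapi import FastAPI

app = FastAPI()

if __name__ == "__main__":
    # "main:app" = module main.py, application object named app
    uvicorn.run("main:app", host="127.0.0.1", port=8000)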
Trust me, this must work!
import * as Icons from "@mui/icons-material";
const { Home } = Icons;
The TuningModel.RandomSearch ranges documentation specifies "a pair of the form (field, s)".
To specify field samplers:
I had this same issue when I was starting out, and all I had done was miss the starting @ off the package name.
So npm i primeng
caused the forced GitHub login prompt.
But npm i @primeng
was what I meant to type, and it worked as expected, but because I was a n00b I didn't notice I'd missed off the @ symbol.
So....
Stepping into the Go SDK's internals after the program finishes is expected in this case: the debugger continues to operate even after your program's code has completed. We are considering a feature to hide these internal debugger steps on request, so that the debugger appears to stop cleanly after your program finishes. Here is a feature request tracking this improvement: https://youtrack.jetbrains.com/issue/GO-10534
Have you tried adding the top level CompanyAPI to your PYTHONPATH environment variable?
I don't think you should need to use --hidden-import.
How about restricting access by IP range?
... or use restricted Git authentication via PAT policies:
The problem is that PATs are easy to misuse, and I see PATs getting misused a LOT.
If you still want to have one container instead of 2-3, check this article: https://medium.com/@boris.haviar/serving-angular-ssr-with-django-8d2ad4e894be
The compression can be specified per column using the USING COMPRESSION syntax:
CREATE TABLE my_table(my_col INTEGER USING COMPRESSION bitpacking);
To make this work, we use the replaceText() method to replace each placeholder in the document body, reversing the value first with a small helper built from split(), reverse(), and join().
Sample Script:
function myFunction() {
const docs = DocumentApp.getActiveDocument();
const body = docs.getBody();
// store your variable names with value here
const [varOne, varTwo, varThree] = ["foo", "bar", "sample"]
// reverse the string by splitting it into an array, reversing it, and joining it back into a string
const reverseText = (string) => {return string.split("").reverse().join("")}
// using replaceText to replace a text in the body
body.replaceText("{variableOne}", reverseText(varOne));
body.replaceText("{variableTwo}", reverseText(varTwo));
body.replaceText("{variableThree}", reverseText(varThree));
}
Hi, I have the exact same problem!
I use Python and marimo in a uv env in my workspace folder (Win11), get the same marimo-not-loading problem, and want to share some additional information that could maybe help:
Basically, it seems the ports of the marimo server, the marimo VS Code extension, and the native VS Code notebook editor do not match up. When I change the port in the marimo VS Code extension from 2818 to 2819, the marimo server starts on port 2820, but not always; the port difference of 1 between the settings and the marimo server start only happens sporadically.
I managed at one point to get all the ports to match up, but still had the same issue.
Also, restarting my PC, VS Code, its extensions, or marimo did not work for me.
I have a doubt. I am getting this error:
Argument of type "tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound] | tuple[Any, NDArray[Any] | Unbound] | Any | tuple[Any | Unknown, Unknown, Unknown] | tuple[Any | Unknown, Unknown] | Unknown" cannot be assigned to parameter "x" of type "ConvertibleToFloat" in function "__new__"
Type "tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound] | tuple[Any, NDArray[Any] | Unbound] | Any | tuple[Any | Unknown, Unknown, Unknown] | tuple[Any | Unknown, Unknown] | Unknown" is not assignable to type "ConvertibleToFloat"
Type "tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound]" is not assignable to type "ConvertibleToFloat"
"tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound]" is not assignable to "str"
"tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound]" is incompatible with protocol "Buffer"
"__buffer__" is not present
"tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound]" is incompatible with protocol "SupportsFloat"
"__float__" is not present
"tuple[Any, NDArray[Any] | Unbound, NDArray[Any] | Unbound]" is incompatible with protocol "SupportsIndex"
...
The code section is:
def calculating_similarity_score(self, encoded_img_1, encoded_img_2):
print(f"calling similarity function .. SUCCESS .. ")
print(f"decoding image .. ")
decoded_img_1 = base64.b64decode(encoded_img_1)
decoded_img_2 = base64.b64decode(encoded_img_2)
print(f"decoding image .. SUCCESS ..")
# Read the images
print(f"Image reading ")
img_1 = imageio.imread(decoded_img_1)
img_2 = imageio.imread(decoded_img_2)
print(f"image reading .. SUCCESS .. ")
# Print shapes to diagnose the issue
print(f"img_1 shape = {img_1.shape}")
print(f"img_2 shape = {img_2.shape}")
# ")
# Convert to float
img_1_as = img_as_float(img_1)
img_2_as = img_as_float(img_2)
print(f"converted image into the float ")
print(f"calculating score .. ")
# Calculate SSIM without the full parameter
if len(img_1_as.shape) == 3 and img_1_as.shape[2] == 3:
# For color images, specify the channel_axis
ssim_score = ssim(img_1_as, img_2_as, data_range=img_1_as.max() - img_1_as.min(), channel_axis=2, full=False, gradient=False)
else:
# For grayscale images
ssim_score = ssim(img_1_as, img_2_as, data_range=img_1_as.max() - img_1_as.min())
print(f"calculating image .. SUCCESS .. ")
return ssim_score
So upon returning the value from this function, I am applying an operator to it like:
if returned_ssim_score > 0.80: ## for this line it gives me the first error above
But when I print the returned value, it works fine, showing the value as 0.98745673...
So can you help me with this?
The solution is this: add your SSO role as IAM or Assumed Role with a wildcard to match all users in that role: AWSReservedSSO_myname_randomstring/*.
The caveat is that the approval rule is not re-evaluated after updating the rule, so you need to delete and recreate the pull request.
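For illustration, a sketch of what the approval rule content can look like (the account ID and branch are placeholders; check the CodeCommit docs for the exact template syntax):
{
  "Version": "2018-11-08",
  "DestinationReferences": ["refs/heads/main"],
  "Statements": [
    {
      "Type": "Approvers",
      "NumberOfApprovalsNeeded": 1,
      "ApprovalPoolMembers": [
        "arn:aws:sts::111122223333:assumed-role/AWSReservedSSO_myname_randomstring/*"
      ]
    }
  ]
}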
Press Shift + right-click, then choose "Save as".
@Tiny Wang
You can reproduce it with the following code in a form.
@(Html.Kendo().DropDownListFor(c => c.reg)
.Filter(FilterType.Contains)
.OptionLabel("Please select a region...")
.DataTextField("RegName")
.DataValueField("RegID")
.Events( e=>e.Change("onRegionChange"))
.DataSource(source =>
{
source.Read(read =>
{
read.Action("GetRegions", "Location");
});
})
)
@Html.HiddenFor(m => m.LocationId)
@(
Html.Kendo().DropDownListFor(c => c.Location)
.Filter(FilterType.Contains)
.OptionLabel("Please select an office...")
.DataTextField("OfficeName")
.DataValueField("OfficeId")
.Events(e => e.Change("changeDefLocation"))
.AutoBind(true)
.DataSource(source =>
{
source.Read(read =>
{
read.Action("GetLocations", "Location").Data("additionalInfo");
});
})
)
@(Html.Kendo().MultiSelectFor(m => m.OtherLocation)
.DataTextField("OfficeName")
.DataValueField("OfficeId")
.DataSource(dataSource =>
dataSource.Read(x => x.Action("GetLocationss", "Location").Data("sdaAndLocinfo"))
.ServerFiltering(false)
)
.Events( x=>x.Change("OnOfficeChange"))
.AutoBind(true)
)
Can I upload a document to an Issue in GitHub?
Yes. Search for:
<span data-component="text" class="prc-Button-Label-pTQ3x" > Paste, drop, or click to add files </span>
This shall invoke <input type="file">.
I have a document that I would like to reference from a github issue, but there is not a way to upload it. Any ideas?
Unfortunately, its Accept header is */*, which means that all file upload type validation occurs server-side.
If you upload an impermissible file type (say, .pak), you shall see:
File type not allowed: .pak
However, this occurs after file upload. To avoid this, GitHub luckily documents its upload restrictions:
We support these files:
- PNG (.png)
- GIF (.gif)
- JPEG (.jpg, .jpeg)
- SVG (.svg)
- Log files (.log)
- Markdown files (.md)
- Microsoft Word (.docx), PowerPoint (.pptx), and Excel (.xlsx) documents
- Text files (.txt)
- Patch files (.patch). If you use Linux and try to upload a .patch file, you will receive an error message. This is a known issue.
- PDFs (.pdf)
- ZIP (.zip, .gz, .tgz)
- Video (.mp4, .mov, .webm)
The maximum file size is:
- 10MB for images and gifs
- 10MB for videos uploaded to a repository owned by a user or organization on a free plan
- 100MB for videos uploaded to a repository owned by a user or organization on a paid plan
- 25MB for all other files
@Parfait
I am getting the error "'NoneType' object has no attribute 'text'".
All the "content" nodes (on which the sort is to be based) have some value in them.
venv now being in the standard Python library, there is no need to install virtualenv (apart from some very peculiar circumstances):
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
I have the same question: can someone tell me if I can adjust the estimation window? As I understand the package description, all the data available before the event date is used for the estimation.
"estimation.period: If “type” is specified, then estimation.period is calculated for each firm-event in “event.list”, starting from the start of the data span till the start of event period (inclusive)."
That would lead to a different estimation length depending on the event date. Can I manually change this (e.g. estimation window t:-200 until t:-10)?
Removing headers is not an actual solution if you actually use the Appname-Swift.h header; in such cases you need to find which additional headers to import in order to fix the issue. For the current issue the solution can be:
#import <PassKit/PassKit.h>
Found the solution in this thread: https://stackoverflow.com/a/34397364/1679620
For anyone looking for how to read files in tests, you can just import like so:
import DATA from "./data.txt";
To answer your question directly: you are assigning a display name to the original 'Popover' component, not to your custom wrapper. I don't know why you would have to do that in the first place, as that behavior is generally implicit when exporting components, unless you are generating components with a factory function.
Perhaps related, but still breaking the code: you have your state defined outside the component, which is a no-no. I would try moving the state inside the component wrapper.
I can't think of a compelling reason to re-export 'Popover' as this should be accessible straight from the package.
I was able to resolve similar problem on Oracle Linux 8 with SELinux enabled like this:
sudo yum install policycoreutils-python-utils
sudo setsebool -P use_nfs_home_dirs 1
sudo semanage fcontext -a -t nfs_t "/nethome(/.*)?"
sudo restorecon -R -v /nethome
The NFS share here is /nethome.
Thank you Jon. That was very helpful.
public void CheckApi(string apiName, ref Int64 apiVersion)
{
    // read the version info once and pack the four parts into a single 64-bit value
    var info = FileVersionInfo.GetVersionInfo(apiName);
    Int64 v1 = info.FileMajorPart;
    Int64 v2 = info.FileMinorPart;
    Int64 v3 = info.FileBuildPart;
    Int64 v4 = info.FilePrivatePart;
    apiVersion = (v1 << 48) | (v2 << 32) | (v3 << 16) | v4;
}
This returns the File Version which for my purposes will always be the same as the Product Version. For anyone who really needs the Product Version there are also four properties to get that info ProductMajorPart, ProductMinorPart, ProductBuildPart, and ProductPrivatePart.
So I found the answer: I saved the copied pages to an array and then had to add the image to each copied page.
foreach (var page in copiedPages)
{
page.Canvas.DrawImage(results.Item2, results.Item1.Left, results.Item1.Top, results.Item1.Width, results.Item1.Height);
}
I am not getting any dark line. I think you must have put a border on it or something.
In case someone is using Compose profiles: this may happen when you start services with profiles but forget to stop the services with them.
In short:
COMPOSE_PROFILES=background-jobs docker compose up -d
COMPOSE_PROFILES=background-jobs docker compose down
No, TwinCAT’s FindAndReplace function does not operate directly on an in-place string. Instead, it returns a modified copy of the input string with the specified replacements applied.
Here is a dirty way to remove O(): add .subs(t**6, 0) to your solution.
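A quick check of the trick, plus sympy's built-in removeO(), which is the supported way to drop the O() term:
from sympy import symbols, sin

t = symbols('t')
expr = sin(t).series(t, 0, 6)   # t - t**3/6 + t**5/120 + O(t**6)
print(expr.subs(t**6, 0))       # the substitution hack from this answer
print(expr.removeO())           # sympy's built-in alternative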
Reposting as an answer: I found a solution for my problem in this question; I can specify an explicitly defined schema when reading the JSON data from an RDD into a DataFrame:
json_df: DataFrame = spark.read.schema(schema).json(json_rdd)
It seems, however, that I'm reading the data twice now:
def _read_specific_version(json_rdd, version, schema):
    json_df: DataFrame = spark.read.schema(schema).json(json_rdd)
    return json_df.filter(col('version') == version)

df_1_0_0 = _read_specific_version(json_rdd, '1.0.0', schema_1_0_0)
df_1_1_0 = _read_specific_version(json_rdd, '1.1.0', schema_1_1_0)
Is there a more efficient way to do this? Is this exploiting parallel execution, or am I enforcing sequential execution here? Maybe a Spark newbie question.
Something I learned today:
You can just paste the text into your spreadsheet. Go to Data > Split text to columns > select your formatting option.
That's super helpful and solved my issue!
Here's the updated code to identify whether the "don't keep activities" flag is turned on or not.
val dka = Settings.Global.getInt(contentResolver, Settings.Global.ALWAYS_FINISH_ACTIVITIES, 0)
Log.i(TAG,"dka -> $dka")
I got the same problem, but I solved it by using the following instruction:
Solved it. Only the first cell can have text. So, remove all text (and runs) from subsequent cells in the merge.
Something like:
var cells = table.Descendants<TableCell>();
foreach (TableCell tc in cells.Skip(1))  // only the first cell in the merge keeps its text
    foreach (Paragraph pg in tc.Elements<Paragraph>())
        pg.RemoveAllChildren();
The event name changed from gmp-placeselect to gmp-select.
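For illustration, a sketch of listening for the renamed event (the element and field names follow my reading of the new PlaceAutocompleteElement docs; treat them as assumptions for your setup):
// placeAutocomplete is a PlaceAutocompleteElement instance
placeAutocomplete.addEventListener('gmp-select', async ({ placePrediction }) => {
  const place = placePrediction.toPlace();
  await place.fetchFields({ fields: ['displayName', 'formattedAddress'] });
  console.log(place.displayName, place.formattedAddress);
});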
I got this error after installing @hot-loader/react-dom.
So I finally figured it out! It happens that when you reach any of this site's URLs without having their cookies, it sets cookies and then redirects to the same URL. Browsers (and Postman) handle this interaction with something called a cookie jar, if I am correct, but node-fetch and others seem not to, so they need a custom agent that implements this. I used this one, and it has good examples: npmjs.com/package/http-cookie-agent
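For anyone landing here, a minimal sketch of wiring that agent into node-fetch (v2), following the package's documented usage; the URL is a placeholder:
const fetch = require('node-fetch');
const { CookieJar } = require('tough-cookie');
const { HttpsCookieAgent } = require('http-cookie-agent/http');

const jar = new CookieJar();
const agent = new HttpsCookieAgent({ cookies: { jar } });

// the first response stores its cookies in the jar; follow-up requests send them back
fetch('https://example.com/', { agent }).then((res) => console.log(res.status));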
Open Azure AI Foundry > Deployments > + Deploy Model
Did this get resolved? I'm facing the same error.
The issue was solved after installing the Microsoft Visual C++ Redistributable package; download it from here: https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170
Check this issue for reference.
This solved it for me:
create a new form, go to MyProject, change the application framework to something else, then change it back, then select the new form.
In ASP.NET web applications, I have found the null-forgiving operator to be useful for defining navigation properties, as in the following example.
Suppose we have two related tables, Course and Teacher.
In the definition of the Course class, we could have something like this:
public int TeacherID { get; set; }
public Teacher Teacher { get; set; } = null!;
The assignment in the second line, using the null-forgiving operator, allows Course.Teacher to be null in C# code, but not in the database, and that can be very useful.
Is there a better way to achieve the same effect?
You joined the tables on the wrong columns (you need to use GenreId, not Name):
SELECT
Track.Name AS TrackName,
Track.GenreId,
Track.Composer,
Genre.Name AS GenreName
FROM Track
INNER JOIN Genre ON Track.GenreId = Genre.GenreId
WHERE
Track.GenreId IN (1, 3, 4, 5, 23, 9)
ORDER BY Track.Name;
How is LIBRARY_SOURCE defined? In the Doxygen configure file, you can set it via PREDEFINED:
PREDEFINED = LIBRARY_SOURCE
You can also check whether the documentation shows up when preprocessing is disabled (it is enabled by default):
ENABLE_PREPROCESSING = NO
It appears that the PATH system variable is too long.
I suggest you manually review the system environment variables and correct anything that looks abnormal (e.g., repetitions).
The sitemap looks accessible and valid, but Google may reject it if the server isn't returning the correct Content-Type (it should be application/xml). Also check for redirects, HTTP/HTTPS inconsistencies, or robots.txt blocks. Sometimes Search Console delays processing; try again after 24-48 hours.
Was using Bootstrap 5 and jQuery 1.12 in my project. Upgrading jQuery version from 1.12 to 3.7 fixed my issue.
I hope the issue has been fixed by now 😂, but for any developer looking into this issue: please remember to call glGetError() in a loop and make sure it is clear, as the function returns one error at a time, while a single call can raise more than one.
It is hard to deduce what's going on, since OpenGL is highly contextual; more code is needed regarding the current state of things in the GL context.
I strongly recommend using a wrapper function for glGetError, or even moving to glDebugMessageCallback.
static void GLClearErrors() {
while (glGetError() != GL_NO_ERROR);
}
static void GLCheckErrors() {
while (GLenum error = glGetError()) {
std::cout << "[OpenGL error] " << error << std::endl;
}
}
It is the little functions like these that ensure safety in your code: simple, yet darn useful.
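Usage looks like this (glDrawElements and indexCount are just stand-ins for whatever call you are checking):
GLClearErrors();  // flush stale errors so the next check reports only this call
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
GLCheckErrors();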
For .NET 8 you should add the package 'Microsoft.Extensions.Hosting.WindowsServices' and call UseWindowsService(). It activates only if it detects that the process is running as a Windows Service.
IHost host = Host.CreateDefaultBuilder(args)
.UseWindowsService()
....
.Build();
await host.RunAsync();
I found a solution for how to make proper IN clauses in case somebody needs to search based on multiple values in a field of a PostgreSQL-specific type, as John Williams's solution works, but only on a varchar field:
return jdbcClient.sql("""
SELECT *
FROM configuration
WHERE status IN (:status)
""")
.param("status", request.configurationStatus().stream().map(Enum::name).collect(Collectors.toList()), OTHER)
.query((rs, rowNum) -> parseConfiguration(rs))
.list();
The key thing is that the third parameter, which defines the SQL type, should be used.
In my case I used OTHER (the type values can be seen in java.sql.Types).

Since you're the package author, maybe you can tell me if I can adjust the estimation window. As I understand the package description, all the data available before the event date is used for the estimation.
"estimation.period: If “type” is specified, then estimation.period is calculated for each firm-event in “event.list”, starting from the start of the data span till the start of event period (inclusive)."
That would lead to a different estimation length depending on the event date. Can I manually change this (e.g. estimation window t:-200 until t:-10)?
Did you ever find a solution for this? I'm having the same problem: the container seems to run indefinitely, and I want it to be marked as "success" so the next tasks can move on.
Thanks to siggermannen and Dan Guzman, I arrived at the following query:
use [OmegaCA_Benchmark]
select
a.database_specification_id,
a.audit_action_id, a.audit_action_name,
a.class, a.class_desc,
a.major_id,
object_schema_name =
CASE
WHEN a.class_desc = 'OBJECT_OR_COLUMN' THEN OBJECT_SCHEMA_NAME(a.major_id)
ELSE NULL
END,
object_name =
CASE
WHEN a.class_desc = 'OBJECT_OR_COLUMN' THEN OBJECT_NAME(a.major_id)
WHEN a.class_desc = 'SCHEMA' THEN SCHEMA_NAME(a.major_id)
WHEN a.class_desc = 'DATABASE' THEN 'OmegaCA_Benchmark'
ELSE NULL
END,
a.minor_id,
a.audited_principal_id, c.name as Principal_Name,
a.audited_result,
a.is_group,
b.name as DB_Aud_Spec_Name,
b.create_date, b.modify_date,
b.audit_guid,
b.is_state_enabled
from sys.database_audit_specification_details a
inner join sys.database_audit_specifications b
on a.database_specification_id = b.database_specification_id
inner join sys.database_principals c
on a.audited_principal_id = c.principal_id
import React from 'react';
import { motion } from 'framer-motion';
import { Card } from '@/components/ui/card';
import './styles.css';

const HardixFFIntro = () => {
  return (
    <motion.div
      initial={{ opacity: 0 }}
      animate={{ opacity: 1 }}
      transition={{ duration: 3 }}
      className="smoke-bg"
    />
    <motion.img
      src="/mnt/data/file-Gkk3FLHg8Uaa2FJ1CcVHZD"
      alt="Hardix.FF Logo"
      initial={{ scale: 0.8, opacity: 0 }}
      animate={{ scale: 1, opacity: 1 }}
      transition={{ duration: 2, delay: 1 }}
      className="logo-img"
    />
    <motion.h1
      initial={{ y: 100, opacity: 0 }}
      animate={{ y
Looking at it, the only thing that would trigger me is the dataloader...
But if it works with the other models, it should work with this too.
Can you share your dataloader code?
Just use border-spacing:
table.that-has-your-th {
  border-spacing: 9px 8px; /* horizontal, vertical; border-spacing takes at most two values */
}
Old post, but I had the same issue. We had to install Reqnroll to replace SpecFlow, and this stopped working for me at one point. I looked everywhere and even reinstalled Reqnroll based on recommendations, but that still didn't work.
I finally reinstalled https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/runtime-8.0.14-windows-x64-installer and it started working again.
Hopefully this solution helps someone.
The binlog mode of logproxy should be used with flink-mysql-cdc, which is equivalent to treating observer + logproxy + obproxy as a MySQL instance. In this way, the connection information uses JDBC (that is, connecting to obproxy).
Refer to https://nightlies.apache.org/flink/flink-cdc-docs-release-3.1/docs/connectors/flink-sources/mysql-cdc/
It is recommended to use the latest version, 3.1.1.
I recommend https://min.io/docs/minio/linux/reference/minio-mc.html, which is well maintained these days.
The issue seemed to be with the matching clause I was using in the code, which I omitted in the example here because I thought it was not the issue. I was matching using a string id instead of an ObjectId. I thought this was not the issue because these string ids seem to work when querying through various other methods.
Hope the answer given in the link below helps resolve your issue.
Angular 18, VS Code 1.95.2, after ng serve, hitting F5 starts the browser and spins indefinitely
This behavior is in fact specified: per the HTML standard, a single newline immediately following the <pre> start tag is ignored by the parser, which is why browsers render it this way.
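A quick illustration:
<!-- the newline right after the start tag is dropped; later newlines are kept -->
<pre>
first rendered line
second rendered line</pre>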
public static function matchesPatternTrc20(string $address) : bool
{
return boolval(preg_match('/^T[1-9a-zA-Z]{33}$/', $address));
}
So, after some more time/issues, I figured out that my phpunit/phpunit version was too old (9.*), so I updated it to work with dama/doctrine-test-bundle (it needs PHPUnit version > 10).
But in the end, I removed dama/doctrine-test-bundle and used hautelook/alice-bundle.
I had to add this code in my /test/bootstrap.php to create the DB and the schema:
$appKernel = new Kernel('test', false);
$appKernel->boot();
$application = new Application($appKernel);
$application->setCatchExceptions(false);
$application->setAutoExit(false);
$application->run(new ArrayInput([
'command' => 'doctrine:database:drop',
'--force' => '1',
]));
$application->run(new ArrayInput([
'command' => 'doctrine:database:create',
]));
$application->run(new ArrayInput([
'command' => 'doctrine:schema:create',
]));
$appKernel->shutdown();
And I added use ReloadDatabaseTrait; at the beginning of my test class.
Microsoft is painfully vague on the details of this but:
Add a role assignment to your key vault in the IAM tab.
Choose Key Vault Certificate User (or whatever role you chose)
For users, choose "User, group, or service principal". In the selection menu, search for "Microsoft Azure App Service". This will bring up the built-in service SPN, which is needed to bind the certificate in Key Vault (you'll notice its application ID is abfa0a7c-a6b6-4736-8310-5855508787cd).
I don't think you even need the user-assigned managed identity once this built-in SPN is set up, but you can test that.
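For reference, a hypothetical CLI equivalent of the role assignment (the scope and names are placeholders):
az role assignment create \
  --role "Key Vault Certificate User" \
  --assignee abfa0a7c-a6b6-4736-8310-5855508787cd \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"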
Used by dotnet new react to launch the Node.js app.
Glance over the aspnetcore repo, folder \src\Middleware\Spa\SpaProxy\: it runs npm start and adds SpaProxyMiddleware.
from io import BytesIO
import twain
import tkinter as tk
from tkinter import ttk, messagebox, filedialog
import logging
import PIL.ImageTk
import PIL.Image
import datetime
scanned_image = None
current_settings = {
'scan_mode': 'Color',
'resolution': 300,
'document_size': 'A4',
'document_type': 'Normal',
'auto_crop': False,
'brightness': 0,
'contrast': 0,
'destination': 'File',
'file_format': 'JPEG',
'file_path': ''
}
def check_adf_support(src):
"""Check if the scanner supports ADF and return ADF status"""
try:
# Check if ADF is supported
if src.get_capability(twain.CAP_FEEDERENABLED):
print("ADF is supported by this scanner")
# Check if ADF is loaded with documents
if src.get_capability(twain.CAP_FEEDERLOADED):
print("ADF has documents loaded")
return True
else:
print("ADF is empty")
return False
else:
print("ADF is not supported")
return False
except twain.excTWCC_CAPUNSUPPORTED:
print("ADF capability not supported")
return False
def apply_settings_to_scanner(src):
"""Apply the current settings to the scanner source"""
try:
# Set basic scan parameters
if current_settings['scan_mode'] == 'Color':
src.set_capability(twain.ICAP_PIXELTYPE, twain.TWPT_RGB)
elif current_settings['scan_mode'] == 'Grayscale':
src.set_capability(twain.ICAP_PIXELTYPE, twain.TWPT_GRAY)
else: # Black & White
src.set_capability(twain.ICAP_PIXELTYPE, twain.TWPT_BW)
src.set_capability(twain.ICAP_XRESOLUTION, float(current_settings['resolution']))
src.set_capability(twain.ICAP_YRESOLUTION, float(current_settings['resolution']))
# Set document size (simplified)
if current_settings['document_size'] == 'A4':
src.set_capability(twain.ICAP_SUPPORTEDSIZES, twain.TWSS_A4)
# Set brightness and contrast if supported
src.set_capability(twain.ICAP_BRIGHTNESS, float(current_settings['brightness']))
src.set_capability(twain.ICAP_CONTRAST, float(current_settings['contrast']))
# Set auto crop if supported
if current_settings['auto_crop']:
src.set_capability(twain.ICAP_AUTOMATICBORDERDETECTION, True)
except twain.excTWCC_CAPUNSUPPORTED:
print("Some capabilities are not supported by this scanner")
def process_scanned_image(img):
"""Handle the scanned image (save or display)"""
global scanned_image
# Save to file if destination is set to file
if current_settings['destination'] == 'File' and current_settings['file_path']:
file_ext = current_settings['file_format'].lower()
if file_ext == 'jpeg':
file_ext = 'jpg'
# Add timestamp to filename for ADF scans
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S_%f")
img.save(f"{current_settings['file_path']}_{timestamp}.{file_ext}")
# Display in UI (only the last image for ADF)
width, height = img.size
factor = 600.0 / width
scanned_image = PIL.ImageTk.PhotoImage(img.resize(size=(int(width * factor), int(height * factor))))
image_frame.destroy()
ttk.Label(root, image=scanned_image).pack(side="left", fill="both", expand=1)
def scan():
global scanned_image
with twain.SourceManager(root) as sm:
src = sm.open_source()
if src:
try:
# Check ADF support
adf_supported = check_adf_support(src)
# Apply settings before scanning
apply_settings_to_scanner(src)
if adf_supported:
# Enable ADF mode
src.set_capability(twain.CAP_FEEDERENABLED, True)
src.set_capability(twain.CAP_AUTOFEED, True)
print("Scanning using ADF mode...")
else:
print("Scanning in flatbed mode...")
# Scan loop for ADF (will scan once if flatbed)
while True:
src.request_acquire(show_ui=False, modal_ui=False)
(handle, remaining_count) = src.xfer_image_natively()
if handle is None:
break
bmp_bytes = twain.dib_to_bm_file(handle)
img = PIL.Image.open(BytesIO(bmp_bytes), formats=["bmp"])
process_scanned_image(img)
# Break if no more documents in ADF
if remaining_count == 0:
break
except Exception as e:
messagebox.showerror("Scan Error", f"Error during scanning: {e}")
finally:
src.destroy()
else:
messagebox.showwarning("Warning", "No scanner selected")
def test_adf_support():
"""Test if ADF is supported and show result in messagebox"""
with twain.SourceManager(root) as sm:
src = sm.open_source()
if src:
try:
# Check basic ADF support
try:
has_adf = src.get_capability(twain.CAP_FEEDER)
except:
has_adf = False
# Check more detailed ADF capabilities
capabilities = {
'CAP_FEEDER': has_adf,
'CAP_FEEDERENABLED': False,
'CAP_FEEDERLOADED': False,
'CAP_AUTOFEED': False,
'CAP_FEEDERPREP': False
}
for cap in capabilities.keys():
try:
capabilities[cap] = src.get_capability(getattr(twain, cap))
except:
pass
# Build results message
result_msg = "ADF Test Results:\n\n"
result_msg += f"Basic ADF Support: {'Yes' if capabilities['CAP_FEEDER'] else 'No'}\n"
result_msg += f"ADF Enabled: {'Yes' if capabilities['CAP_FEEDERENABLED'] else 'No'}\n"
result_msg += f"Documents Loaded: {'Yes' if capabilities['CAP_FEEDERLOADED'] else 'No'}\n"
result_msg += f"Auto-feed Available: {'Yes' if capabilities['CAP_AUTOFEED'] else 'No'}\n"
result_msg += f"Needs Preparation: {'Yes' if capabilities['CAP_FEEDERPREP'] else 'No'}\n"
messagebox.showinfo("ADF Test", result_msg)
except Exception as e:
messagebox.showerror("Error", f"Error testing ADF: {e}")
finally:
src.destroy()
else:
messagebox.showwarning("Warning", "No scanner selected")
def browse_file():
filename = filedialog.asksaveasfilename(
defaultextension=f".{current_settings['file_format'].lower()}",
filetypes=[(f"{current_settings['file_format']} files", f"*.{current_settings['file_format'].lower()}")]
)
if filename:
current_settings['file_path'] = filename
file_path_var.set(filename)
def update_setting(setting_name, value):
current_settings[setting_name] = value
if setting_name == 'file_format' and current_settings['file_path']:
# Update file extension if file path exists
base_path = current_settings['file_path'].rsplit('.', 1)[0]
current_settings['file_path'] = base_path
file_path_var.set(base_path)
def create_settings_panel(parent):
# Scan Mode
ttk.Label(parent, text="Scan Mode:").grid(row=0, column=0, sticky='w')
scan_mode = ttk.Combobox(parent, values=['Color', 'Grayscale', 'Black & White'], state='readonly')
scan_mode.set(current_settings['scan_mode'])
scan_mode.grid(row=0, column=1, sticky='ew')
scan_mode.bind('<<ComboboxSelected>>', lambda e: update_setting('scan_mode', scan_mode.get()))
# Resolution
ttk.Label(parent, text="Resolution (DPI):").grid(row=1, column=0, sticky='w')
resolution = ttk.Combobox(parent, values=[75, 150, 300, 600, 1200], state='readonly')
resolution.set(current_settings['resolution'])
resolution.grid(row=1, column=1, sticky='ew')
resolution.bind('<<ComboboxSelected>>', lambda e: update_setting('resolution', int(resolution.get())))
# Document Size
ttk.Label(parent, text="Document Size:").grid(row=2, column=0, sticky='w')
doc_size = ttk.Combobox(parent, values=['A4', 'Letter', 'Legal', 'Auto'], state='readonly')
doc_size.set(current_settings['document_size'])
doc_size.grid(row=2, column=1, sticky='ew')
doc_size.bind('<<ComboboxSelected>>', lambda e: update_setting('document_size', doc_size.get()))
# Document Type
ttk.Label(parent, text="Document Type:").grid(row=3, column=0, sticky='w')
doc_type = ttk.Combobox(parent, values=['Normal', 'Text', 'Photo', 'Magazine'], state='readonly')
doc_type.set(current_settings['document_type'])
doc_type.grid(row=3, column=1, sticky='ew')
doc_type.bind('<<ComboboxSelected>>', lambda e: update_setting('document_type', doc_type.get()))
# Auto Crop
auto_crop = tk.BooleanVar(value=current_settings['auto_crop'])
ttk.Checkbutton(parent, text="Auto Crop", variable=auto_crop,
command=lambda: update_setting('auto_crop', auto_crop.get())).grid(row=4, column=0, columnspan=2, sticky='w')
# Brightness
ttk.Label(parent, text="Brightness:").grid(row=5, column=0, sticky='w')
brightness = ttk.Scale(parent, from_=-100, to=100, value=current_settings['brightness'])
brightness.grid(row=5, column=1, sticky='ew')
brightness.bind('<ButtonRelease-1>', lambda e: update_setting('brightness', brightness.get()))
# Contrast
ttk.Label(parent, text="Contrast:").grid(row=6, column=0, sticky='w')
contrast = ttk.Scale(parent, from_=-100, to=100, value=current_settings['contrast'])
contrast.grid(row=6, column=1, sticky='ew')
contrast.bind('<ButtonRelease-1>', lambda e: update_setting('contrast', contrast.get()))
# Destination
ttk.Label(parent, text="Destination:").grid(row=7, column=0, sticky='w')
dest_frame = ttk.Frame(parent)
dest_frame.grid(row=7, column=1, sticky='ew')
destination = tk.StringVar(value=current_settings['destination'])
ttk.Radiobutton(dest_frame, text="Screen", variable=destination, value="Screen",
command=lambda: update_setting('destination', destination.get())).pack(side='left')
ttk.Radiobutton(dest_frame, text="File", variable=destination, value="File",
command=lambda: update_setting('destination', destination.get())).pack(side='left')
# File Format
ttk.Label(parent, text="File Format:").grid(row=8, column=0, sticky='w')
file_format = ttk.Combobox(parent, values=['JPEG', 'PNG', 'BMP', 'TIFF'], state='readonly')
file_format.set(current_settings['file_format'])
file_format.grid(row=8, column=1, sticky='ew')
file_format.bind('<<ComboboxSelected>>', lambda e: update_setting('file_format', file_format.get()))
# File Path
ttk.Label(parent, text="File Path:").grid(row=9, column=0, sticky='w')
global file_path_var
file_path_var = tk.StringVar(value=current_settings['file_path'])
path_frame = ttk.Frame(parent)
path_frame.grid(row=9, column=1, sticky='ew')
ttk.Entry(path_frame, textvariable=file_path_var).pack(side='left', fill='x', expand=True)
ttk.Button(path_frame, text="Browse...", command=browse_file).pack(side='left')
# Scan Button
ttk.Button(parent, text="Scan", command=scan).grid(row=10, column=0, columnspan=2, pady=10)
# ADF Test Button
ttk.Button(parent, text="Test ADF Support", command=test_adf_support).grid(row=11, column=0, columnspan=2, pady=5)
# Main application setup
logging.basicConfig(level=logging.DEBUG)
root = tk.Tk()
root.title("Scanner Application with ADF Test")
# Main frame
main_frame = ttk.Frame(root, padding=10)
main_frame.pack(fill='both', expand=True)
# Settings panel on the left
settings_frame = ttk.LabelFrame(main_frame, text="Scanner Settings", padding=10)
settings_frame.pack(side='left', fill='y')
# Image display area on the right
image_frame = ttk.Frame(main_frame)
image_frame.pack(side='right', fill='both', expand=True)
create_settings_panel(settings_frame)
root.mainloop()
I have this working only in flatbed mode.
But I want ADF mode using Python.
Has anybody experienced this?