For me, the echo %JAVA_HOME% command was returning %JAVA_HOME% back on my new Windows system. After setting JAVA_HOME to "C:\Program Files\Eclipse Adoptium\jdk-21.0.4.7-hotspot", I removed the following from pom.xml, which fixed the build issue:
<properties>
<java.version>21</java.version>
</properties>
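If the variable is missing entirely, you can set it persistently from a Command Prompt. This is just a sketch using the same JDK path as above (adjust it to your install); note that setx only affects newly opened terminals:
setx JAVA_HOME "C:\Program Files\Eclipse Adoptium\jdk-21.0.4.7-hotspot"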
io/resource is looking for the file in the class path, not the current directory. It may even be looking for something/file.txt in the class path, since it's a relative path in the something namespace.
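A minimal sketch, assuming a standard project layout where resources/ is on the classpath:
;; resources/something/file.txt is resolved via the classpath, not the CWD
(require '[clojure.java.io :as io])
(slurp (io/resource "something/file.txt"))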
You can enable xp_cmdshell and have a procedure that executes a PowerShell script using that command. The contents of the script can include anything, in your case a web request. This doesn't require importing any assemblies, and I find CLR to be overkill for this use case.
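A minimal T-SQL sketch, assuming you have permission to change server options; the URL is a placeholder:
-- enable xp_cmdshell (it is off by default)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;
-- run a web request from a PowerShell one-liner
EXEC xp_cmdshell 'powershell -NoProfile -Command "Invoke-WebRequest -Uri https://example.com/endpoint"';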
func setupView() {
let eventMessenger = viewModel.getEventMessenger()
let model = viewModel.getEnvironmentModel()
let swiftUIView = CreateHeroSubraceView()
.environmentObject(eventMessenger)
.environmentObject(model)
let hostingController = UIHostingController(rootView: swiftUIView)
hostingController.view.translatesAutoresizingMaskIntoConstraints = false
hostingController.view.backgroundColor = .clear
hostingController.additionalSafeAreaInsets = .zero
addChild(hostingController)
view.addSubview(hostingController.view)
hostingController.view.snp.makeConstraints { make in
make.edges.equalToSuperview()
}
hostingController.didMove(toParent: self)
}
This looks like a bug, but I made it work properly by adding an empty slot.
The fix does not make much sense, but it seems to force the correct default slot.
<template #thead />
Flutter: just delete the cache folder in the bin folder in the Flutter root folder, run flutter doctor -v, and all should be well.
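On macOS/Linux that boils down to the following, with <flutter-root> standing in for wherever Flutter is installed:
rm -rf <flutter-root>/bin/cache
flutter doctor -v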
I'm not that experienced, and maybe it's not the safest solution, but have you tried running the query so that it returns an Object[] instead? It could help avoid the N+1 issue, since e.subEntity would be loaded in the same query.
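A minimal sketch of what I mean, with hypothetical entity names:
// fetches the entity and its sub-entity in one JPQL query
List<Object[]> rows = em.createQuery(
        "SELECT e, e.subEntity FROM MyEntity e", Object[].class)
    .getResultList();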
If you look at the description of strconv.Itoa, it tells you:
Itoa is equivalent to FormatInt(int64(i), 10).
Therefore, to avoid any issues with truncation, simply use:
strconv.FormatInt(lastId, 10)
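For example, assuming lastId is an int64:
package main

import (
	"fmt"
	"strconv"
)

func main() {
	var lastId int64 = 1 << 40 // would overflow a 32-bit int
	fmt.Println(strconv.FormatInt(lastId, 10)) // prints 1099511627776
}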
It looks like this was asked in the GitHub issues for Kysely already:
https://github.com/kysely-org/kysely/issues/838
The author essentially recommends the solution I proposed in the question itself which is to wrap it in an object:
private async makeQuery(db: Conn) {
const filter = await getFilterArg(db);
return {
query: db.selectFrom("item").where("item.fkId", "=", filter)
}
}
Here is a pretty simple regex pattern generator. My approach is really simple: parse the end-user-friendly input string, like yyyy-MM-dd,HH:mm:ss or 2025-06-05,08:37:38, and build a new regex pattern by exchanging all letters or digits for \d and escaping some chars like ., / or \.
The main issue was to correctly handle the specific [A|P]M pattern, but I think it should be OK. Honestly, it is not super perfect, but fine for getting a clue how it could be done.
Please let me know if you need further explanation of my code and I will add it here tomorrow.
function new-regex-pattern {
param (
[string]$s
)
$ampm = '[A|P]M'
if (($s -match [Regex]::Escape($ampm)) -or ($s -match $ampm)) {
$regexOptions = [Text.RegularExpressions.RegexOptions]'IgnoreCase, CultureInvariant'
if ($s -match [Regex]::Escape($ampm)) {
$pattern = -join ('(?<start>.*)(?<AM_PM>',
[Regex]::Escape($ampm), ')(?<end>.*)')
}
else {
$pattern = -join ('(?<start>.*)(?<AM_PM>', $ampm, ')(?<end>.*)')
}
$regexPattern = [Regex]::new($pattern, $regexOptions)
$match = $regexPattern.Matches($s)
return (convert-pattern $match[0].Groups['start'].Value) +
$match[0].Groups['AM_PM'].Value +
(convert-pattern $match[0].Groups['end'].Value)
}
return convert-pattern $s
}
function convert-pattern {
param (
[string]$s
)
$result = ''  # initialize explicitly so the first += starts from a string
if ($s.Length -gt 0) {
foreach ($c in [char[]]$s) {
switch ($c) {
{ $_ -match '[A-Z0-9]' } { $result += '\d' }
{ $_ -match '\s' } { $result += '\s' }
{ $_ -eq '.' } { $result += '\.' }
{ $_ -eq '/' } { $result += '\/' }
{ $_ -eq '\' } { $result += '\\' }
default { $result += $_ }
}
}
}
return $result
}
$formatinput1 = 'M/d/yyyy,HH:mm:ss.fff'
$formatinput2 = 'yyyy-MM-dd,HH:mm:ss'
$formatinput3 = 'yyyy-M-d h:mm:ss [A|P]M'
$sampleinput1 = '6/5/2025,08:37:38.058'
$sampleinput2 = '2025-06-05,08:37:38'
$sampleinput3 = '2025-6-5 8:37:38 AM'
$example1 = '6/5/2025,08:37:38.058,1.0527,-39.5013,38.072,1.0527,-39.5013'
$example2 = '2025-06-05,08:37:38,1.0527,-39.5013,38.072,1.0527,-39.5013'
$example3 = '2025-6-5 8:37:38 AM,1.0527,-39.5013,38.072,1.0527,-39.5013'
$regexPattern = [Regex]::new((new-regex-pattern $formatinput1))
Write-Host $regexPattern.Matches($example1)
$regexPattern = [Regex]::new((new-regex-pattern $formatinput2))
Write-Host $regexPattern.Matches($example2)
$regexPattern = [Regex]::new((new-regex-pattern $formatinput3))
Write-Host $regexPattern.Matches($example3)
$regexPattern = [Regex]::new((new-regex-pattern $sampleinput1))
Write-Host $regexPattern.Matches($example1)
$regexPattern = [Regex]::new((new-regex-pattern $sampleinput2))
Write-Host $regexPattern.Matches($example2)
$regexPattern = [Regex]::new((new-regex-pattern $sampleinput3))
Write-Host $regexPattern.Matches($example3)
https://drive.google.com/file/d/1TGQUtIpuH0FPuXT640OMuJ9jG8YpUbq0/view?usp=drivesdk
Both files are under the license ownership of Chandler Ayotte; this is a portion of a work in progress. Anyone who loves physics will love this. The volumetric addition of qubits is lacking knowable information that, when applied, will provide a different perspective. There is an upper boundary completely controlled by surface area.
Do you have a custom process? Under Processing, click on your process and look at the right pane: check your editable region and your server-side condition, and make sure you select the right option.
If you are using the Universal Render Pipeline, one setting that can produce this issue is the Layer your GameObject is set to being filtered out in the Filtering property of the default Universal Renderer Data.
The Scene View uses the default Universal Renderer Data set in the URP Asset's Renderer List for its Renderer settings.
In your URP Asset, double click the first Universal Renderer Data asset in the Renderer List to open it in the Inspector.
Under Filtering, check the Opaque Layer Mask and the Transparent Layer Mask to ensure the Layer your GameObject that is not rendering is checked on, or set the filter to Everything.
See the Unity Manual - Universal Renderer asset reference for URP page for more details on the Filtering property.
Better than disabling the checker completely, if you don't want to add "U" to your supposedly unsigned literals, is to disable just that case of the checker, with - key: hicpp-signed-bitwise.IgnorePositiveIntegerLiterals in your configuration. (Copied from a comment at the request of julaine.)
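In a .clang-tidy file that looks like this (a sketch; keep whatever Checks line you already have):
CheckOptions:
  - key: hicpp-signed-bitwise.IgnorePositiveIntegerLiterals
    value: true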
pyfixest author here - you can access the R2 values via the `Feols._R2` attribute. You can find all the attributes of the Feols object in the documentation. Do you have a suggestion for how we could improve the documentation and make these things easier to find?
Interesting topic. How do you modify the add button at point 2?
You can use:
pd.options.display.html.use_mathjax = False
See this for more information:
https://jonathansoma.com/everything/python/dollar-signs-italics-jupyter/
You can disable MSVC compatible mode by passing -fno-ms-compatibility
option:
clang-cl.exe -fno-ms-compatibility main.cpp
If you installed Java after installing IntelliJ, Java should be separately installed and not affected.
Even if IntelliJ uninstalls Java, it should be very easy to reinstall.
So it turns out this is a bug with Poetry 1.8.0 in relation to Artifactory that was patched in version 1.8.2.
Bug details: https://github.com/python-poetry/poetry/issues/9056
Changelog details for 1.8.2: https://python-poetry.org/history/#182---2024-03-02
I upgraded my Poetry version, and now the pyproject.toml configuration I described above works as expected with no issues.
For Qiskit, a detailed explanation is here:
https://blog.shivalahare.live/quantum-computing-explained-simply-for-developers/
https://blog.shivalahare.live/getting-started-with-qiskit-a-beginners-guide-to-quantum-programming/
Method: Implicit Chain of Thought via Knowledge Distillation (ICoT-KD)
🎯 Goal:
Train a model to answer complex questions without generating reasoning steps, by learning only the final answer from a teacher model's CoT output.
🧠 Core Approach:
Teacher Model (e.g., GPT-3.5):
Generates full reasoning (CoT) + final answer
5 × 8 = 40 → 40 − 12 = 28 → Answer: 28
Student Model (e.g., T5, GPT-J):
Sees only the question → learns to predict “28”
✦ CoT is never shown during training or inference
🛠️ Training Steps:
Teacher generates (Question → CoT + Answer)
Extract (Question → Answer)
Train student on final answers only
✨ Enhancements (Optional):
Self-Consistency Voting (across multiple CoT outputs)
Filtering incorrect teacher answers
✅ Key Advantages:
Fast, CoT-free inference
No model changes required
Effective on math/symbolic tasks
Works with medium-sized models
Methodology: Implicit Chain of Thought via Knowledge Distillation (ICoT-KD)
Goal: Train a model to answer complex reasoning questions correctly without generating explicit reasoning steps — by using CoT-labeled answers from a teacher model.
🧠 Core Framework
1. Teacher Model (e.g., GPT-3.5):
Prompted with CoT-style questions to produce step-by-step rationales followed by final answers.
Example output:
“There are 7 days in a week. 7 squared is 49. Answer: 49”
2. Student Model (e.g., T5, GPT-J):
Trained to map the original input question → only the final answer, using the teacher’s output.
CoT steps are not shown to the student at any point.
Training supervised via standard cross-entropy loss on the final answer.
🧪 Optional Enhancements
Self-Consistency Decoding (SCD):
Use majority voting across multiple CoT generations to select the most consistent answer.
Model Filtering:
Student only distills from teacher generations where the answer matches the gold label.
📌 Training Pipeline
Generate (Q, CoT + A) pairs via teacher
Extract (Q, A) pairs
Train student on (Q → A)
No CoT reasoning at inference
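A minimal Python sketch of that data step (all names are hypothetical; the filter corresponds to the model-filtering enhancement above):
def build_distillation_pairs(teacher_outputs, gold_answers):
    # teacher_outputs: list of (question, cot_text, answer) from the teacher
    pairs = []
    for (question, _cot, answer), gold in zip(teacher_outputs, gold_answers):
        if answer.strip() == gold.strip():    # keep only correct teacher answers
            pairs.append((question, answer))  # the CoT text is dropped entirely
    return pairs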
✅ Advantages
General-purpose, model-agnostic
Works with medium models (T5-Base, GPT-J)
Requires no architectural changes
Effective on math and symbolic reasoning tasks
Methodology: Stepwise Internalization for Implicit CoT Reasoning
🎯 Goal:
Train language models to internalize reasoning steps — achieving accurate answers without outputting intermediate steps.
⚙️ Key Approach: Stepwise Internalization
Start with Explicit CoT Training:
Train the model on questions with full step-by-step reasoning and final answer.
Gradual Token Removal (Curriculum Learning):
Iteratively remove CoT tokens from inputs.
Fine-tune the model at each stage.
Forces the model to internalize reasoning within hidden states.
Final Stage – Fully Implicit CoT:
The model predicts the answer directly from the question with no visible reasoning steps.
🔁 Training Optimization Techniques:
Removal Smoothing: Adds random offset to CoT token removal to avoid abrupt changes.
Optimizer Reset: Reset training optimizer at each stage to stabilize learning.
📈 Benefits:
Simpler than knowledge distillation-based methods.
No teacher model required.
Model-agnostic and scalable (effective from GPT-2 to Mistral-7B).
Significant speed gains with minimal loss in accuracy.
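A small Python sketch of the curriculum's removal step (the offset distribution and names are assumptions, not the paper's exact recipe):
import random

def truncate_cot(cot_tokens, stage, max_offset=3):
    # At stage k, drop the first k CoT tokens plus a small random offset
    # ("removal smoothing"); late stages return [], i.e. fully implicit CoT.
    cut = min(len(cot_tokens), stage + random.randint(0, max_offset))
    return cot_tokens[cut:]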
Methodology: Reasoning in a Continuous Latent Space (Latent CoT)
🎯 Goal:
Train models to reason internally — without generating reasoning steps — by using a latent vector to carry the thought process.
⚙️ Core Architecture
Reasoning Encoder
Takes a question and maps it to a latent vector (a hidden representation of the reasoning process).
Learns to encode “how to think” into a compact form.
Answer Decoder
Uses the latent vector to generate the final answer only.
No reasoning steps are ever output.
🧪 How it’s Trained
Use existing Chain-of-Thought (CoT) traces to guide the encoder.
CoT helps shape the latent space, even though the model never generates the steps.
The training is fully differentiable (end-to-end), allowing the entire system to be optimized smoothly.
✅ Why It’s Powerful
No CoT at inference: reasoning is done silently inside the vector space.
Faster and more compact than explicit CoT methods.
Generalizes well across reasoning tasks.
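As a rough PyTorch sketch of the two-part design (hypothetical modules and shapes; the paper's actual architecture differs):
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # reasoning encoder
        self.head = nn.Linear(dim, vocab_size)             # stand-in answer decoder

    def forward(self, question_ids):
        _, h = self.encoder(self.embed(question_ids))  # h: the latent "thought" vector
        return self.head(h[-1])                        # answer logits only, no CoT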
What is this paper trying to do?
Normally, when a language model solves a hard question (like a math problem), we make it write out the steps, like:
"7 × 4 = 28. 28 + 12 = 40. Answer: 40."
This is called Chain of Thought (CoT) — it helps the model think clearly and get better answers.
But writing out all those steps:
Takes more time
Makes the model slower
Isn’t always needed if the model can “think” silently
🎯 So what’s the new idea?
Instead of making the model write its thinking, this paper teaches it to do the reasoning silently — inside its “mind”.
Like how humans often do math in their head without saying each step out loud.
They call this Latent CoT — the thinking happens in a hidden, internal form.
🧱 How does the model do it?
It’s like building a machine with two parts:
1. 🧠 Reasoning Encoder
It reads the question
It creates a special vector (a bunch of numbers) that secretly represents how to solve the problem
Think of this like your brain quietly planning an answer
2. 🗣️ Answer Decoder
It takes that hidden “thought vector” and turns it into the final answer
It doesn’t show any reasoning steps — just the answer
🧪 How do they train it?
At first, they let the model see examples with full CoT steps (like 7×4 = 28 → 28 + 12 = 40). But the model is trained to:
Not repeat those steps
Just use them to shape its internal thinking space
In simple terms:
The model learns from the explanations, but doesn’t copy them — it learns to reason silently.
And because the whole system is trained all together, it learns smoothly and efficiently.
✅ Why is this helpful?
🔕 Faster: No reasoning steps to write out
🧠 Smarter: Reasoning is hidden, but still accurate
📦 Compact: Takes less space and time
🔁 Trainable end-to-end: Easy to improve all parts together
🔬 Good at reasoning tasks like math and logic
🎓 Final Analogy:
Imagine teaching a student to solve problems in their head after showing them many worked-out examples — and they still get the right answers, just silently. That’s exactly what this model is doing.
🔁 Updated Example for Slide
Teacher Model (e.g., GPT-3.5) — Prompted with CoT:
“Alex had 5 packs of markers. Each pack had 8 markers. He gave 12 markers to a friend. How many does he have left?
→ 5 × 8 = 40
→ 40 − 12 = 28
Answer: 28”
Student Model (e.g., T5, GPT-J) — Trained to see only:
“Alex had 5 packs of markers. Each pack had 8 markers. He gave 12 markers to a friend. How many does he have left?”
→ "28"
✅ This example comes from GSM8K, one of the key datasets used in the paper’s experiments.
You can take a look at this article (use Google to translate): https://www.drhead.org/articles/quelques-techniques-seo-blackhat-1751392180396 ; it explains how to optimize SEO. But to answer you, take a look at the SSG/ISR tech (by Next.JS); you'll have what you need.
The issue was that the production environment variables were not set up in my Expo account. I set them here and the crash resolved.
https://docs.expo.dev/eas/environment-variables/#create-environment-variables
Team member here. VS Code Insiders now has MCP out of preview, with a new policy for enterprises. Docs are being updated for next week's release.
Please file vscode issues for any issues you find for how policies are applied, so we can triage and debug.
Firstly: I was able to fix the Chrome problem on iOS by making sure all the event handlers were NOT async functions. I blame AI for making these async in the first place, as the expo-audio APIs do not need to be awaited.
Secondly: to get iOS to work I did the following little trick: on the first call to play (in the event handler), I did a play, pause, play. I get a warning in there, but for some reason this manages to do some of the "priming" I needed. I do not know why this worked, but a lot of trial and error led me here.
Perhaps at some point I might try to reproduce this issue and fix in a small project to give back to the expo-audio.
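For reference, the priming trick looked roughly like this (a sketch; player stands for an expo-audio player instance, and the handler is deliberately not async):
const onPressPlay = () => { // synchronous, so it stays in the user-gesture context
  player.play();
  player.pause(); // logs a warning, but this "primes" playback on iOS
  player.play();
};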
You can keep both apps with their own IDP and avoid coupling by using a third IDP as a broker (like another Keycloak instance).
This broker handles login via both app1's IDP and app2’s Keycloak using OIDC.
Basically:
app1 stays as-is
app2 uses Keycloak
the broker gives you SSO between both
This way, each app manages its own users/sessions, and the broker keeps a global session across them.
I wrote a guide on how to set up multiple identity providers in Keycloak if you want to go that route:
https://medium.com/@raf.lucca/one-login-many-sources-oidc-sso-with-multiple-identity-providers-keycloak-08cf3cd13c78
I changed this and it worked. Source: https://samcogan.com/assign-azure-privileged-identity-management-roles-using-bicep/
param requestType string = 'AdminAssign'
Nothing has worked for me.
This is my insert: REPLACE(('The 6MP Dome IP Camera's clarity is solid, setup easy. Wide lens captures more area.'), '''', '''''')
It breaks because of the single quote in Camera's; these are dynamic variables.
Any suggestions?
When you encounter an error while pushing to a remote repository because it contains changes you don't have locally, you need to integrate those changes first. Start by fetching the latest updates from the remote and then merge or rebase them into your local branch. If conflicts arise, resolve them manually in your text editor, stage the resolved files, and complete the merge or rebase process. Once your local branch is up-to-date and conflicts are resolved, you can safely push your changes to the remote repository.
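In commands (branch names assumed):
git fetch origin
git rebase origin/main    # or: git merge origin/main
# resolve any conflicts, then:
git add <resolved-files>
git rebase --continue     # or: git commit, if you merged
git push origin main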
The cart behavior in Hydrogen 2 + Remix often relates to how Shopify handles cart sessions and optimistic UI updates. Below is a summary of potential reasons and troubleshooting techniques:
Optimistic UI kicks in immediately after CartForm.ACTIONS.LinesAdd, so your cart state temporarily shows the added line and updated quantity.
After the server processes the cart update, your app fetches the real cart state from Shopify.
If your server-side action or Shopify's API returns a cart with totalQuantity: 0, the optimistic state gets replaced by that empty cart, causing prices to show $0 and quantity to flip back.
The cart session is not properly persisted between client and server (Hydrogen relies on cart cookies or session).
Your server-side cart.addLines() call might be using an empty or expired cart ID, causing Shopify to create a new empty cart silently.
The cart context on the server side might be missing or not properly wired, so your cart object in the action is invalid.
(Ensure that the action receives a valid cart from your server-side context.)
export async function loader({ request, context }: LoaderFunctionArgs) {
const cart = await getCartFromRequest(request, context);
return json({ cart });
}
Confirm Cart Session is Present
In browser DevTools:
-- Check cookies for cart or similar session identifier
-- Ensure the same cart ID is used between optimistic state and server response
Log the cart ID and the server cart object (change your action to record what occurs):
export async function action({ request, context }: ActionFunctionArgs) {
const { cart } = context;
console.log('Cart context on server:', cart);
const formData = await request.formData();
const { action, inputs } = CartForm.getFormInput(formData);
if (action === CartForm.ACTIONS.LinesAdd) {
const result = await cart.addLines(inputs.lines);
console.log('Cart result from server:', result.cart);
return result;
}
}
Check:
-- Is cart.id valid?
-- Is totalQuantity correct on the server response?
-- Are prices correct?
useEffect(() => {
refetchCart();
}, [optimisticCart?.totalQuantity]);
(Replace refetchCart with your method for forcing a fresh cart query from server after changes)
Sometimes old cookies or dev server cache cause weird cart behavior. Try:
Clear cookies
Restart dev server
Test in private/incognito window
I hope this is useful! If you need assistance debugging logs or reviewing your server-side context configuration, please let me know.
You can use excellent tools like Deepgram, gpt-4-mini (this is the best for speed), or ElevenLabs.
In that case, you can reduce the delay by 1.2~1.5 s.
Hope you are doing well.
This is a Windows/.NET error, and it usually means something is wrong with how Python is being executed. It is likely a conflict with your system configuration or an integration like OneDrive or PowerShell, or even a corrupted Python installation.
It looks like you’ve correctly identified that you can programmatically set the Webhook Endpoint API version.
As to why Stripe is indicating that your default version is 2020-03-02: there is an account-wide default API version that is separate from the fixed API version used by your SDK, and it can be upgraded from the Workbench. This account-level default determines the default API version used for raw calls, and was previously used to set the default API version for loosely typed language SDKs. Newer SDKs now default to the most recent version at the time of release, but the account-level default is still required for backward compatibility.
There is a simple answer:
req.remote_ip_address
See https://crowcpp.org/master/reference/structcrow_1_1request.html
In my case <span style="font-size:20pt"></span> worked, so you can set margin space before or after your element.
You can create a bean of the JwtDecoder with a load-balanced RestTemplate passed in:
@Bean
public JwtDecoder jwtDecoder(RestTemplate restTemplate) {
return NimbusJwtDecoder.withIssuerLocation("http://authorization-server").restOperations(restTemplate).build();
}
I just switched to another video adapter (NVIDIA instead of Intel) and added swapchain recreation when it is out of date or suboptimal, and it now works.
UPDATE:
But validation still says I should use one semaphore per swapchain image; I don't know why, since I wait on them anyway.
If you go just past the boundaries on your grid it works:
x = linspace(0.9, 2.1, n);
y = linspace(-0.1, 1.1, n);
z = linspace(-0.1, 1.1, n);
The issue here was in fact the endpoint. Instead of https://api.businesscentral.dynamics.com/v2.0/<user-domain-name>/api/Contoso/Punchout/v2.0/companies(<company-id>)/<EntityName>, it is https://api.businesscentral.dynamics.com/v2.0/<tenant id>/<environment name>/api/<Api publisher>/<Api group>/v2.0/companies(<company ID>)/<EntitySetName>.
If after this you receive a 403 Forbidden error, simply generate a permission set within VS Code that gives your add-on application full control over its tables/pages/data, then in the BC UI search for "entra" -> Microsoft Entra application -> select the application you are working on -> under permission sets, add the one you just created in VS Code.
Hi, thanks for the help. I asked GPT about the problem and got these answers; I implemented them and now it works. Here they are for reference.
Thank you all anyway.
"The default look-controls implementation handles touch input with the onTouchMove function. In version 1.7.0 of A‑Frame, the source shows that onTouchMove only adjusts yaw (horizontal rotation):
onTouchMove: function (evt) {
var direction;
var canvas = this.el.sceneEl.canvas;
var deltaY;
var yawObject = this.yawObject;
if (!this.touchStarted || !this.data.touchEnabled) { return; }
deltaY = 2 * Math.PI * (evt.touches[0].pageX - this.touchStart.x) /
canvas.clientWidth;
direction = this.data.reverseTouchDrag ? 1 : -1;
// Limit touch orientation to yaw (y axis).
yawObject.rotation.y -= deltaY * 0.5 * direction;
this.touchStart = { x: evt.touches[0].pageX, y: evt.touches[0].pageY };
},
Because the onTouchMove handler only updates yawObject.rotation.y, vertical pitch is not affected by dragging. The device’s gyroscope still changes orientation (magicWindowTrackingEnabled defaults to true when the attribute isn’t parsed), so the view moves when you physically tilt the device, but dragging up or down doesn’t modify pitch. To allow pitch rotation from dragging, you would need to customize or extend the look-controls component to apply movement to pitchObject.rotation.x as well.
AFRAME.components["look-controls"].Component.prototype.onTouchMove = function (evt) {
var canvas = this.el.sceneEl.canvas;
if (!this.touchStarted || !this.data.touchEnabled) { return; }
var touch = evt.touches[0];
var deltaX = 2 * Math.PI * (touch.pageX - this.touchStart.x) / canvas.clientWidth;
var deltaY = 2 * Math.PI * (touch.pageY - this.touchStart.y) / canvas.clientHeight;
var direction = this.data.reverseTouchDrag ? 1 : -1;
this.yawObject.rotation.y -= deltaX * 0.5 * direction;
this.pitchObject.rotation.x -= deltaY * 0.5 * direction;
var PI_2 = Math.PI / 2;
this.pitchObject.rotation.x = Math.max(-PI_2, Math.min(PI_2, this.pitchObject.rotation.x));
this.touchStart = { x: touch.pageX, y: touch.pageY };
};
"
I already used 2 instances of MC_MoveAbsolute with MC_BufferMode.MC_Aborting: if you calculate a new target position or velocity, execute the new command; the old function block aborts and the new one takes over.
There is a widely trusted and useful tool out there to download and install multiple software packages at once.
Check out the video tutorial: https://youtu.be/7HEfbuY3pmg
I have the same issue: PhonePe integration error, Package Signature Mismatched.
Sending response back: {"result":false,"failureReason":"PACKAGE_SIGNATURE_MISMATCHED"}
Try this.
SELECT DISTINCT M.col1, M.col2, DATE_FORMAT(M.date_col, '%m/%d/%y') as date_col
FROM t1 M
ORDER BY 1, 3;
Found the issue: the code wasn't able to correctly create the pool/job/task due to the Batch account subscription restrictions. I asked to increase the quota, then created the pool and the job manually, then created the task programmatically, and it worked fine!
In my case, it was because of my Kotlin version. Upgrading the Kotlin version to 2.2.0 solved the problem.
To figure out which query is causing it, check the Performance Advisor or the Profiler tab in MongoDB Atlas — both will show you the problematic queries. To fix it, look at the fields you're using in filters and sorting, and create indexes based on that.
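For example, if a slow query filters on status and sorts by createdAt (hypothetical field names), a matching compound index would be:
db.orders.createIndex({ status: 1, createdAt: -1 })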
I just renamed the directory ~/.config/Code, started code, then renamed it back. It seems to work.
When you want them in the LaTeX output, you need to have files without the .svg extension and change the following Doxyfile settings:
LATEX_CMD_NAME = pdflatex -shell-escape
EXTRA_PACKAGES = {svg}
To then generate the PDF you need Inkscape installed and in your PATH environment variable as well (for more, see: https://tex.stackexchange.com/questions/122871/include-svg-images-with-the-svg-package).
This assumes you have the correct image directory set up (https://doxygen.nl/manual/config.html#cfg_image_path).
Since it's an anonymous volume, you can pass the --renew-anon-volumes flag to docker compose as a workaround, which will recreate the volume every time you build.
Example:
docker compose -f ../docker-compose-dev.yml --env-file .env.dev up -d --renew-anon-volumes --build
Reference: https://docs.docker.com/reference/cli/docker/compose/up/
Thanks Dennis! You've given me something to digest. I've found a different approach as well. Start the script with:
Start-Transcript -Path "C:\Filepath\File.txt" -Append
A CDI 4+ solution (Weld 5+, for example) is:
@ApplicationScoped
public class Application {
public void onStartup(@Observes @Priority(1) Startup event) {
// ...
}
}
The issue you're facing could be due to different versions of Cython on the two systems. Even though you are using the same package versions, Cython's behavior can change between versions, leading to different output when generating C files.
The error message you're seeing, multiple definitions of '__pyx_CommonTypesMetaclass_get_module', might be caused by Cython generating the same function multiple times due to changes in the code generation process between versions.
To resolve this, I recommend ensuring that both systems are using the same version of Cython. Try using the same Cython version on both systems to see if that resolves the issue.
I've got a reply from MS. MIP SDK does not support the usage of managed identities.
I do not want the main menu/menu bar; I already have that. I want the icon bar or toolbar that goes across the top. What you said just gives me the Project Explorer.
I tried the solution proposed by @gboeing, and it worked well. I created a list of nodes that do not belong to bridges, then built a dictionary assigning a tolerance to each of those nodes. Any node that is not included in the list is automatically assigned a tolerance of 0.
I'm sharing the code below in case it's helpful to others.
Thanks!
set_nodes = set(G.nodes)
bridge_nodes = []
for u, v, bridge in G.edges.data("bridge"):
if bridge == 'yes':
bridge_nodes.append(u)
bridge_nodes.append(v)
set_bridge_nodes = set(bridge_nodes)
other_nodes = set_nodes.difference(set_bridge_nodes)
tolerance_other_nodes = {k:30 for k in other_nodes}
intersections = ox.consolidate_intersections(G, rebuild_graph=True, tolerance=tolerance_other_nodes, dead_ends=False)
I think I found some explanations and will answer my own question:
The "declare @p5 unknown" in the rpc:completed event was merely a side effect of a timeout. Whenever I set a very high timeout, the query completes (sometimes in 20 minutes) and the rpc:completed event displays "declare @p5 dbo.TypeTableGuid" indeed.
I had been fooled by several interfering problems, in fact. The query was made of a lot of inserts before a select (this is an oversimplified description, but the actual query does start with "SELECT TOP(1)"). The number of inserts varied from 10000 to 65000, as I realised later by debugging the application code. But I suspect the profiler can only save so many characters per query, which would explain why it never recorded more than a fraction of them in the trace.
The copy/paste from the profiler to SSMS was therefore a useless test: the result was always the one I expected (SELECT TOP(1) ...), but based on a subset of data (6521 GUIDs) instead of 65000, so the performance from SSMS was always good.
I can now work on something a bit more interesting, like why this query goes off the rails from time to time, or whether there is a better way to formulate it. I'm not sure that a UDTT variable holding 65000 GUID values is a good option...
What you're observing is a common issue when combining CSS Grid layouts and shadow DOM, especially regarding how percentage heights are resolved.
The core problem:
In your .container class, you set height: 100%. For this to work, all ancestor elements, including your :host custom element, must have a defined height. However, setting only min-height on :host does not establish a definite height; it only sets a lower bound. As a result, .container (and its grid children) see their parent's computed height as auto (i.e., undefined), so height: 100% collapses to zero.
Why does height: 0 "fix" it?
If you set height: 0 and combine it with min-height, browsers compute the element's height as at least the min-height value. Now the parent's height is effectively known, and the grid's height: 100% calculation produces the expected result. Without an explicit height, the reference point for height: 100% is missing.
Summary:
- height: 100% only works if all ancestor elements up to the root have an explicit height.
- min-height alone is not sufficient for descendants using height: 100%.
- height: 0 (or height: auto) in combination with min-height provides a resolved height for children.
Possible solutions:
- Set an explicit height in :host, e.g., height: 0; min-height: 64px;
- Avoid height: 100% in the child grid, and let content or min-height determine sizing.
- Use display: flex in the host if you want more intuitive height inheritance.
It depends on your needs. But most of the time a single endpoint is enough.
Making different APIs for mobile and web complicates things later: inconsistent data and race conditions will creep in.
What, no love for Object.entries()? This seems to be exactly what you're looking for...
function getDataset(e) {
return new Map(Object.entries(e.dataset));
}
function getDatasetValuesArray(e) {
return Object.entries(e.dataset).map(([, value]) => value);
}
for ( let e of document.querySelectorAll('p') ) {
let datasetValuesArray = getDatasetValuesArray(e);
e.textContent = datasetValuesArray.join(", ");
}
<p data-one="foo">...</p>
<p data-one="bar" data-two="baz">...</p>
I'm sorry, I guess I don't know enough about this. I can't get macros to be enabled, the VBA project isn't saving, and I can't import it. I'm so confused.
If you want to manually solve the captcha you would need some 3rd party service to do it for you (2captcha), or you could make Selenium wait, and manually click the checkbox or do the puzzle yourself, which I assume isn't what you want to do.
I would say you can just use the information given by this answer here, from How can I bypass the Google CAPTCHA with Selenium and Python?. There are simple ways in Selenium to clear browser history, data, and cookies.
I've worked with several teams who’ve run into the same issue: testing at the feature level becomes hard to scale when infra gets expensive and clunky to manage.
My group has addressed this by letting teams spin up test environments on Kubernetes, on demand, and run any kind of test, without committing to heavy pre-provisioned infrastructure. It’s been a practical way for larger engineering orgs to reduce test costs while speeding up delivery.
Let me know if you’d be open to a quick walkthrough or would prefer some examples from others who’ve rolled it out. Overview here: https://www.youtube.com/watch?v=ECY3ebzeCOs
Hi!
I had the exact same issue, and note: I got help from GPT to solve this, because no error and no output just doesn't make any sense. Especially since I had made at least a HelloWorld driver before, so something must have changed (and broken, which is classic with Windows!).
Thanks to GPT (note: in my case, at least), it began working: a .sys output file got generated!
In the project file below, replace:
NAME with your project name (e.g. HelloFromKern)
ETC1 with a unique GUID
the .inf filename accordingly
`-----`
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectGuid>{ETC1}</ProjectGuid>
<RootNamespace>NAME</RootNamespace>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>Driver</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>WindowsKernelModeDriver10.0</PlatformToolset>
<DriverType>KMDF</DriverType>
<DriverTargetPlatform>Universal</DriverTargetPlatform>
<!-- ✅ Fix: force proper .sys output -->
<TargetName>NAME</TargetName>
<TargetExt>.sys</TargetExt>
<TargetPath>$(OutDir)$(TargetName)$(TargetExt)</TargetPath>
<OutputFile>$(OutDir)$(TargetName)$(TargetExt)</OutputFile>
<!-- ✅ Fix: make MSBuild link correctly -->
<LinkCompiled>true</LinkCompiled>
<!-- ✅ Fix: prevent deletion due to signing or packaging -->
<SignMode>None</SignMode>
<GenerateDriverPackage>false</GenerateDriverPackage>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props"
Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')"
Label="LocalAppDataPlatform" />
</ImportGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<AdditionalIncludeDirectories>
C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\km;
C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared;
C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um;
C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\wdf\kmdf\1.33
</AdditionalIncludeDirectories>
<PreprocessorDefinitions>_AMD64_;_WIN64;UNICODE;_UNICODE</PreprocessorDefinitions>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<BufferSecurityCheck>false</BufferSecurityCheck>
</ClCompile>
<Link>
<SubSystem>Native</SubSystem>
<EntryPointSymbol>DriverEntry</EntryPointSymbol>
<OutputFile>$(OutDir)$(TargetName)$(TargetExt)</OutputFile>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>
C:\Program Files (x86)\Windows Kits\10\Lib\10.0.19041.0\km\x64
</AdditionalLibraryDirectories>
<AdditionalDependencies>ntoskrnl.lib;hal.lib</AdditionalDependencies>
<IgnoreAllDefaultLibraries>true</IgnoreAllDefaultLibraries>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="Driver.c" />
<Inf Include="NAME.inf" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
</Project>
`-----`
A smaller shoutout to GPT; it saved me here. It's so interesting that no errors were generated, and there was no GUI way to see it (at least in my case; one comment, I think, even pointed out there was no "linker" option under project properties, and I didn't see one either), which supports the explanation: if nothing gets linked, it makes sense that no error comes out, because everything went well, but no linking was done, hence no output. (Note: it's been a while since I used Visual Studio, so my terminology may be rusty, and I'm running on 24 hours of no sleep writing this, so please don't judge the format! If the XML is wrong in some way, do point it out.)
Thanks a lot!
Take great care! These kinds of errors are very energy-consuming indeed!
Jane
One library that can parse GET params is this one.
https://www.npmjs.com/package/express-query-parser
Its source code:
https://github.com/jackypan1989/express-query-parser/tree/master
I saw some GitHub issues related to this failure, but none of them worked.
Even though Node 22.13.0 is compatible with Angular 20 (https://angular.dev/reference/versions#actively-supported-versions), the failure was fixed by using a newer version of Node: v24.2.0.
OpenGL Uniform Buffers, also known as Uniform Buffer Objects (UBOs), are OpenGL objects that store uniform data (like matrices, vectors, or other shader constants) in GPU memory. They allow you to group related uniform variables into a buffer and share that buffer across multiple shaders, improving performance and code organization.
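A minimal C sketch of creating a UBO and attaching it to a binding point (error handling omitted; the block layout is just an example):
// C side: buffer for two 4x4 matrices, bound to binding point 0
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 2 * 16 * sizeof(float), NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

/* GLSL side, shared by any shader that declares the same block:
   layout(std140) uniform Matrices {
       mat4 projection;
       mat4 view;
   };
*/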
Found the solution myself: I needed to add support for 'macosx' under widget.appex -> Build Settings -> Supported Platforms.
https://i.sstatic.net/yrKFBsp0.png
Because Mac Catalyst does not use the iOS runtime (as I understood it) and can't create a widget extension without this tweak.
You're issuing the token from your frontend using http://localhost:8082, but then your backend tries to validate it using http://keycloak:80 as the issuer. These are two different URLs, so the iss (issuer) claim in the token doesn't match what your backend expects.
Keycloak includes the full issuer URL in the token (e.g. http://localhost:8082/realms/myrealm), and the backend expects to verify the token against the same issuer.
If the backend tries to verify against http://keycloak:80/realms/myrealm, the token will be rejected because the issuer doesn't match.
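One way out, sketched for a Spring Boot resource server (adjust to your actual stack), is to point the backend at the same issuer URL the token was issued with:
# application.yml
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: http://localhost:8082/realms/myrealm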
Deleting "C:\Users\<Your User>\AppData\Roaming\Google\.AndroidStudio4.X" worked for me!
Thanks
The issue is likely missing key values for the file part of the form data. The Box API requires the "name" and "filename" keys.
name="file"; filename="myfile.pdf"
This is not mentioned in the API reference but is shown in the following article:
https://developer.box.com/guides/uploads/direct/file/
It is also referred to in the following thread:
Previously, I updated the model by going to the EntityModel page in Visual Studio, but the update didn't take effect. Later, I found out that if you assign a default value to a column in SQL Server, the corresponding column in your model must have its StoreGeneratedPattern property set to Computed. Otherwise, even if a default value exists in the database, you'll still get errors like "Cannot insert null" when sending null from your code, because the default value won't be triggered.
The default theme of a ggplot2 graph has a background color. You can easily and quickly change this to a white background color by using the theme functions, such as theme_bw() , theme_classic() , theme_minimal() or theme_light() (See ggplot2 themes gallery).
The answer of csbrogi is correct.
Just for detail, this is the configuration of ROPC; then you can make the request using the client_secret as well.
However, I don't recommend using ROPC in a new project: it's deprecated and tightly coupled to the identity provider. The best approach is to use the Authorization Code grant flow. If it's an external app calling the API, you can also use an offline token.
Thank you so much @matt for taking the time.
The problem was with the Assets folder placement: I had placed the folder inside the app's subfolder, so it was not tagged properly. When I placed the folder outside, it worked. We need to make sure the Xcode project and the Assets folder are at the same level.
I contacted Sonatype support, and the solution is to use the right URL, where the issue is fixed and sorting by version works. For the example used in my question, the correct query URL would be:
In my case, disabling "Random MAC address" in the mobile device's Wi-Fi settings solved the issue.
While configuring OpenWrt, I struggled with this issue for some time until I found this post. It turned out that once the kernel is built, one should go to its root directory (where the vmlinux file and the gdb/scripts directory are located) and execute an additional make command (make scripts_gdb) to build the gdb modules. It will build the modules and create a symlink to gdb/scripts/vmlinux-gdb.py. Note that I also built the kernel with CONFIG_GDB_SCRIPTS, which is probably required as well.
Found a workaround
var redisConnectionString = builder.Configuration.GetConnectionString("redis-cache")!;
builder.Services.AddSignalR()
.AddStackExchangeRedis(redisConnectionString, options =>
{
// Optional: set configuration options
options.Configuration.ChannelPrefix = RedisChannel.Literal("MyApp");
options.Configuration.AbortOnConnectFail = false;
});
Also opened an issue at https://github.com/dotnet/aspire/issues/10211
Since I came here looking for a simple "user solution", I post it anyway:
A map to search amenities is https://osmapp.org/ . A list of other possible web maps is here.
# Save this as e.g. loop_tts.py; make sure you have internet access for gTTS
from gtts import gTTS
from pydub import AudioSegment
# Your text (Polish legal content used as the TTS input):
text = """3. Bezpośredni pościg za osobą, wobec której:
a) użycie broni palnej było dopuszczalne w przypadkach określonych w pkt. 1 lit.a-d i pkt.2;
b) istnieje uzasadnione podejrzenie, że popełniła przestępstwo, o którym mowa w:
- art. 115 § 20 – przestępstwo o charakterze terrorystycznym lub jego groźba;
- art. 148 – zabójstwo;
- art. 156 § 1 – spowodowanie ciężkiego uszczerbku na zdrowiu;
- art. 163-165 – sprowadzenie zdarzenia, które zagraża życiu lub zdrowiu wielu osób lub mieniu
w wielkich rozmiarach lub spowodowanie bezpośredniego niebezpieczeństwa sprowadzenia takiego zdarzenia: pożar, zawalenie budowli, spowodowanie wybuchu, spowodowanie epidemii, choroby zakaźnej, zatrucia;
- art. 197 – doprowadzenie przemocą, groźbą lub bezprawnie do obcowania płciowego, zgwałcenie, doprowadzenie do poddania się innej czynności seksualnej;
- art. 252 – wzięcie i przetrzymywanie zakładnika lub czynienie przygotowań do takiego czynu;
- art. 280 – kradzież z użyciem przemocy lub groźbą użycia przemocy lub rozbój;
- art. 282 – kto, w celu osiągnięcia korzyści majątkowej, przemocą, groźbą zamachu na życie lub zdrowie albo gwałtownego zamachu na mienie, doprowadza inną osobę do rozporządzenia mieniem własnym lub cudzym albo do zaprzestania działalności gospodarczej;
Ustawy z dnia 6 czerwca 1997 r. – Kodeks karny:
4. Konieczność:
a) ujęcia osoby:
- wobec której użycie broni palnej było dopuszczalne w przypadkach określonych w pkt 1 lit. a-d;
- wobec której istnieje uzasadnione podejrzenie, że popełniła przestępstwo,
o którym mowa w art. 115 § 20, art. 148, art. 156 § 1, art. 163-165, art. 197, art. 252 i art. 280-282 ustawy z dnia 6 czerwca 1997 r. – Kodeks karny;
- dokonującej zamachu, o którym mowa w pkt 1 lit. d;
- jeżeli schroniła się w miejscu trudno dostępnym, a z okoliczności zdarzenia wynika, że może
użyć broni palnej lub innego podobnie niebezpiecznego przedmiotu.
b) ujęcia lub udaremnienia ucieczki osoby zatrzymanej, tymczasowo aresztowanej lub
odbywającej karę pozbawienia wolności, jeżeli:
- ucieczka tej osoby stwarza zagrożenie życia lub zdrowia uprawnionego lub innej osoby,
- istnieje uzasadnione podejrzenie, że osoba ta może użyć materiałów wybuchowych, broni palnej lub innego podobnie niebezpiecznego przedmiotu,
- pozbawienie wolności nastąpiło w związku z uzasadnionym podejrzeniem lub stwierdzeniem popełnienia przestępstwa, o którym mowa w art. 115 § 20, art. 148, art. 156 § 1, art. 163-165, art. 197, art. 252 i art. 280-282 ustawy z dnia 6 czerwca 1997 r. – Kodeks karny.
"""
# Generate the speech
tts = gTTS(text, lang='pl', slow=False)
tts.save("fragment.mp3")
# Load it and loop it 10x
sound = AudioSegment.from_mp3("fragment.mp3")
looped = sound * 10
looped.export("fragment_10x_looped.mp3", format="mp3")
print("Done! File saved as 'fragment_10x_looped.mp3'")
You can use the vite-plugin-inspect plugin to inspect which files Vite is spending time processing; then we can discuss where the problem lies.
Without benchmarks, there is no point in discussing performance optimization.
I would recommend using Laravel Media Library by Spatie.
You don't have to think about how to name the file, and you can attach the file to any model and do whatever you want with the file.
By using the FilePond API, which the FileUpload field makes use of, I could accomplish something similar to what I wanted:
FileUpload::make('Files')
->acceptedFileTypes(self::$filetypes)
->disk('local')
->moveFiles()
->multiple(true)
->extraAlpineAttributes([
'x-init' => '$el.addEventListener(\'FilePond:processfileprogress\', (e) => { if(e.detail.file.filename === \'test.csv\') e.detail.file.abortProcessing() })'
])
->afterStateUpdated(function ($state) {
foreach ($state as $idx => $file) {
$filename = $file->getClientOriginalName();
if ($filename == 'test.csv') {
$file->delete();
}
}
})
With this method I cannot make it look red as if the upload failed; it only says "aborted" and stays grey. Though the contrast with the successful green of the other files should be good enough.
And I still need to use the ->afterStateUpdated() method to handle things server-side and delete the uploaded files.
Did you find a fix for the above? I am getting the same error.
Prioritize network diagnosis and optimization between AWS and GCP, including checking AWS network configurations like security groups and routing, and consider dedicated connections.
Tune your C# Pub/Sub client settings by increasing flow control and optimizing client count for concurrency (2-4 vCPU for 1 vCPU), and potentially reduce AckDeadlineSeconds from 600 s to a lower value once network stability is achieved.
I am running into the same issue. I do not see the role to assign it from the portal, so I added a custom role with an action defined to allow container creation via Java code.
It just blows up with the following exception, but there is no clue about what is required to fix it.
The given request [POST /dbs/<DB_NAME>/colls] cannot be authorized by AAD token in data plane.
Status Code: Forbidden
I tried adding Cosmos DB Operator, but that did not work either. Any ideas?
Looking at the source code, you need to call `Set-TransientPrompt`. However, that one's not exported.
PrestaShop uses TCPDF to produce PDFs, but the process is messy. In general, using a PHP library to produce PDFs is sufficient for simple documents, but it's not a good option if you want advanced features like CSS Level 3, grids, flex...
I suggest using a headless browser like the open source Gotenberg project, or an advanced API like pdfactorix, which can create PDFs from data and a separate template.
That’s a great post, thanks for breaking this down.
I’ve seen similar hallucination issues pop up when the RAG pipeline doesn’t enforce proper context control or document-level isolation. If you're building anything in a sensitive or enterprise context, it might be worth looking into tools that provide stronger safeguards around that.
I work at a company called TeraVera, and we just launched a secure AI platform with a zero-trust design and tenant-level data segregation. It was built specifically to prevent things like model hallucinations and unauthorized data blending in RAG applications—especially helpful in regulated industries.
If you're interested, here’s a link to request API access and check out the dev docs: teravera.com/api-access-form/
Main site: teravera.com
Hope that helps!
Minimal requirements with MySQL Workbench 8.0.40 seem to be:
GRANT SELECT ON performance_schema.* TO <username>@<host>
Minify conflicts with ProGuard. Other posts report similar behaviors, just in different areas of code, depending on the app. Code starts running but can have extremely odd behaviors. In one version of my app it ran, but the XML parsing was returning bad values. In another, file output streams were being closed during use. Google should detect this.
Try:
mvn wrapper:wrapper -Dmaven=3.5.4
This should update .mvn/wrapper/maven-wrapper.properties under your project root; then IDEA will use Maven 3.5.4 instead.
Remember to reload your maven project after this.
Official doc: https://maven.apache.org/wrapper/#using-a-different-version-of-maven
In a MariaDB or MySQL database, all related tables have to have the same ENGINE; otherwise it will raise `errno: 150 "Foreign key constraint is incorrectly formed"`.
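For example, with hypothetical table names:
-- align the storage engines before creating the foreign key
ALTER TABLE parent_table ENGINE = InnoDB;
ALTER TABLE child_table  ENGINE = InnoDB;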
Any solution? I'm trying to do the same here: I have one product page with different sizes, and when I click a size the page changes, because each size is a different product, and Slick always stays on the first slide instead of the one with the .selected class.
Google Workflows does not directly accept ternary operators. To implement this, you can add a step that uses a switch block.
see more here:
https://cloud.google.com/workflows/docs/reference/syntax/conditions#yaml
- check_some_variable_exists:
switch:
- condition: ${"some_variable" in input}
assign:
- some_variable: ${input.some_variable}
- condition: ${true}
assign:
- some_variable: "default_value"
I've tried using the noted approach. Unfortunately, it only seems to apply the theme to the first dialog defined in a Lua script. Subsequent ones seem only to obey explicitly global attributes, like those for DLGBGCOLOR; attributes such as control padding within "myTheme" appear to be ignored. I've checked that the user theme is still specified as the default theme prior to building dialogs after the initial one, and found that it is.
However, if I set myTheme prior to specifying each dialog, then things work as expected.
I'm starting to wonder if the Lua compilation of IUP might have a bug, but I'm certainly open to suggestions on how to correctly use the THEME capability. Any assistance or examples for getting this to work reliably for multi-dialog Lua scripts would be very much appreciated.
thank you so much for your helpful comments and for pointing me in the right direction.
I'm currently working with the same mobile signature provider and services described in this StackOverflow post, and the endpoints have not changed.
Here's what I'm doing:
I calculate the PDF hash based on the byte range (excluding the /Contents field, as expected).
I then Base64 encode this hash and send it to the remote signing service.
The service returns an XML Signature structure containing only the signature value and the certificate. It does not re-hash the input; it signs the hash directly.
Based on that signature and certificate, I construct a PKCS#7 (CAdES) container and embed it into the original PDF using signDeferred.
However, when I open the resulting PDF in Adobe Reader, I still get a “signature is invalid” error.
Additionally, Turkcell also offers a PKCS#7 signing service, but in that case the returned messageDigest is only 20 bytes, which doesn't match the 32-byte SHA-256 digest I computed from my PDF. Because of this inconsistency, I cannot proceed with their PKCS#7 endpoint either.
I’m really stuck at this point and unsure how to proceed. Do you have any advice on:
how to correctly construct the PKCS#7 from a detached XML signature (raw signature + certificate)?
whether I must include signed attributes, or if there's a way to proceed without them?
or any clues why Adobe might mark the signature as invalid even when the structure seems correct?
Any help would be greatly appreciated!