This distinction has irritated me for 30 years, and still trips people up. First off, there is a clear distinction between Authentication (AuthN) and Authorization (AuthZ). AuthN is answering the question of "Who are you?" AuthZ answers the question of "What are you allowed to do?" It is necessary to answer the question of AuthN before approaching the question of AuthZ, because you have to know who the user is before deciding what they can do.
"401 Unauthorized" is supposedly stating that the question of AuthN has not been answered, and "403 Forbidden" answers the AuthZ question negatively. What is confusing is that the text "Unauthorized" is incorrect, and has been for 30+ years. Should be "Not Authenticated". But many apps out there are probably looking for the text (instead of just the code), and would break if they changed it now.
Hopefully this clears up the confusing for anyone looking at the response and thinking, "Is that status right?" It is... and it isn't.
sql_data is a "SnowflakeQueryResult" object, not a DataFrame, which is why it is not subscriptable when you try to get the column with data['COLUMN_1'].
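If you need a subscriptable result, one option is to fetch into a pandas DataFrame instead. A minimal sketch, assuming the classic snowflake-connector-python client rather than whatever produced the SnowflakeQueryResult (connection parameters and table name are placeholders):
import snowflake.connector
conn = snowflake.connector.connect(account="...", user="...", password="...")
cur = conn.cursor()
cur.execute("SELECT column_1 FROM my_table")
df = cur.fetch_pandas_all()  # returns a pandas DataFrame
print(df["COLUMN_1"])        # subscriptable by column name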
You need to wrap your root component with tui-root in app.html.
E.g.
<tui-root>
<router-outlet></router-outlet>
</tui-root>
The Kafka Connect Azure Blob Storage source plugin now works even if the data was written to Azure Blob Storage without using the sink connector plugin. It is now a "generalized" source plugin.
I could read JSON data from an Azure Blob Storage account even though the sink plugin was not used to store it there. All that is needed is the path to the files stored in the blob container.
In my case I needed to make sure equalTo() gets an argument of the proper type. Here it was not a String but a Long, and instead of a Long this method expects a Double, so convert it first:
val id: Long
val query = ref.orderByChild("id").equalTo(id.toDouble())
In another case, the whole root node was deleted.
As for deleting: as mentioned in others' answers, use removeValue().
How can I convert this code to Python? Thanks a lot.
Please refer to the following discussion.
https://github.com/nextauthjs/next-auth/discussions/11271
In my case, modifying the import as follows solved the problem:
import { signOut } from "next-auth/react";
It seems to be working properly, but I'm very confused. I can't understand why it has to be done this way.
Well, the good or bad news is that fillna(method='ffill') doesn't work anymore; use the dedicated ffill() method instead.
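A minimal sketch of the replacement, assuming a current pandas version:
import pandas as pd
df = pd.DataFrame({"a": [1.0, None, None, 4.0]})
# old: df.fillna(method='ffill')  # no longer works in recent pandas
df = df.ffill()  # forward-fill NaNs with the last valid value
print(df)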
FROM python:3.10
ARG AUTHED_ARTIFACT_REG_URL
COPY ./requirements.txt /requirements.txt
RUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt
Then run this command to build your image:
docker build --build-arg AUTHED_ARTIFACT_REG_URL=https://oauth2accesstoken:$(gcloud auth print-access-token)@url-for-artifact-registry .
Check out this link for the full details of his answer.
This helped me.
I had a UIKit scroll view with a SwiftUI view inside it.
iOS 16+
hostingController.sizingOptions = [.intrinsicContentSize]
Other versions:
ParentViewController:
public override func viewDidLoad() {
super.viewDidLoad()
...
scrollView.translatesAutoresizingMaskIntoConstraints = false
scrollView.delegate = self
view.addSubview(scrollView)
...
let mainVC = AutoLayoutHostingController(rootView: MainView(viewModel: viewModel))
addChild(mainVC) /// Important
guard let childView = mainVC.view else { return }
childView.backgroundColor = .clear
childView.translatesAutoresizingMaskIntoConstraints = false
scrollView.addSubview(childView)
mainVC.didMove(toParent: self) /// Important
childView.setContentHuggingPriority(.required, for: .vertical)
childView.setContentCompressionResistancePriority(.required, for: .vertical)
NSLayoutConstraint.activate([
....
scrollView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
scrollView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
scrollView.topAnchor.constraint(equalTo: view.topAnchor),
scrollView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
childView.leadingAnchor.constraint(equalTo: scrollView.leadingAnchor, constant: 28),
childView.topAnchor.constraint(equalTo: scrollView.topAnchor, constant: 16),
childView.bottomAnchor.constraint(equalTo: scrollView.bottomAnchor, constant: -20),
childView.widthAnchor.constraint(equalTo: scrollView.widthAnchor, constant: -56),
....
])
}
// MARK: - AutoLayoutHostingController
public final class AutoLayoutHostingController<OriginalContent: View>: UIHostingController<AnyView> {
// MARK: - Initializers
public init(rootView: OriginalContent, onChangeHeight: ((CGFloat) -> Void)? = nil) {
super.init(rootView: AnyView(rootView))
self.rootView = rootView
.background(
SizeObserver { [weak self] height in
onChangeHeight?(height)
self?.view.invalidateIntrinsicContentSize()
}
)
.eraseToAnyView()
}
@available(*, unavailable)
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
}
Well, I would like to share a new one: XXMLXX (https://github.com/luckydu-henry/xxmlxx), which uses C++20 features and a std::vector to store the XML tree. It also contains a parsing algorithm built on parser combinators and an explicit stack (no recursion), which can probably be very fast, although it may not be very "standard".
There is no adequate answer to this yet. Most answers here use a static filling mode (IOC, FOK). Naturally the symbol's filling mode is supposed to be the filling mode accepted for that symbol, but that is not the case with every broker. Using a static filling mode works for just one MT5 instance, but if you consider a case with multiple MT5 instances, where one filling mode does not work for all brokers, this becomes an issue.
If you have an empty NoWarn tag (<NoWarn></NoWarn>) in your .csproj, it will overwrite the Directory.Build.props settings and all warnings will show again.
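If the goal is to add project-specific suppressions without losing the inherited ones, append to the property instead of overwriting it. A sketch for the .csproj (NU1701 is just an illustrative warning code):
<PropertyGroup>
  <!-- $(NoWarn) keeps the codes inherited from Directory.Build.props -->
  <NoWarn>$(NoWarn);NU1701</NoWarn>
</PropertyGroup>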
Since the warning comes from library code, there is a good chance some dependency relies on a stale pydantic version. The options are to wait for an update or to try installing an older pydantic version, e.g. pip install 'pydantic<2'.
The easiest way to solve this problem is to use the msix package from pub.dev. When you build with this package, it includes all the necessary libraries for the MSIX build.
A bit of a late answer, but from what I've read, Informix does not support MARS (Multiple Active Result Sets) via the .NET Db2 provider (SDK).
The "AAAA..." pattern indicates you're getting null bytes in your buffer. The issue is that ReadAsync(buffer) doesn't guarantee reading the entire stream in one call.
Use CopyToAsync() with a MemoryStream instead:
using var stream = file.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024);
using var memoryStream = new MemoryStream();
await stream.CopyToAsync(memoryStream);
var base64String = Convert.ToBase64String(memoryStream.ToArray());
For a complete solution with security considerations, check out this guide: How to Convert an Image to a Base64 String in Blazor
I solved the issue using the old-school method of restarting my laptop. It had been running for 13 days. After the restart the cursor works perfectly.
This could be useful if the string is null or has spaces at the end:
Example:
string Test = "1, 2, 3, 4, ";
Test = Test.TrimEnd(',');
//Result: "1, 2, 3, 4, ";
Test = (Test ?? "").Trim().TrimEnd(',');
//Result: "1, 2, 3, 4";
Snakemake seems to resolve these paths relative to .snakemake/conda, i.e. two folders deeper than Snakemake's working directory (e.g. as configured with `snakemake --directory`).
EUREKA!
The file /components/zenoh-pico/include/zenoh-pico/config.h **MUST BE ALTERED**:
```
#define Z_FRAG_MAX_SIZE 4096
#define Z_BATCH_UNICAST_SIZE 2048
#define Z_BATCH_MULTICAST_SIZE 2048
#define Z_CONFIG_SOCKET_TIMEOUT 5000
```
*MOST IMPORTANT* seems to be changing the line `Z_CONFIG_SOCKET_TIMEOUT` from 100 to 5000. Feel free to experiment with lower values (it seems to work with 1000).
Project is uploaded in github: https://github.com/georgevio/ESP32-Zenoh.git
The git commit command records changes to the local repository. It captures a snapshot of the currently staged changes, creating a new "commit" object in the project's history.
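Typical usage: stage the changes first, then commit them with a message:
git add notes.txt                     # stage the change
git commit -m "Describe the change"   # snapshot the staged changes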
On Android 12 and 13, Google has restricted native call recording due to privacy policies. However, you can still record calls using third-party apps that support accessibility services or VoIP-based recording.
Update the pybind11 repository and this issue disappears.
ALTER TABLE ttab DROP CONSTRAINT IF EXISTS unq_ttab;
CREATE UNIQUE INDEX unq_ttab_1 ON ttab (partition_num, id);
ALTER TABLE ttab ADD CONSTRAINT unq_ttab UNIQUE (partition_num, id);
There's a note in the "Using API tokens" article that says:
API tokens used to access Bitbucket APIs or perform Git commands must have scopes.
Creating a scoped token and using it instead of password in PyCharm prompt solved the issue for me.
I had a similar problem, although I was using the --onedir option of PyInstaller. In my case the error was due to Unicode characters in the directory name. Copying the ONNX model to a temp file solved the problem; it works even when the Windows username contains Unicode.
So basically, when you run Vulkan, it kinda “takes over” the window. Think of it like Vulkan puts its own TV screen inside your game window and says “okay, I’m in charge of showing stuff here now.”
When you switch to DirectX, you’re telling Vulkan “alright, you can leave now.” Vulkan packs up its things and leaves… but the problem is, it forgets to actually take its TV screen out of the window. So Windows is still showing that last frame Vulkan left behind, like a paused YouTube video.
Meanwhile, DirectX is there, yelling “hey, I’m drawing stuff!” — but Windows ignores it, because it still thinks Vulkan owns the window. That’s why you just see the frozen Vulkan image.
The fix is basically making sure Vulkan really leaves before DirectX moves in. That means:
Wait until Vulkan is 100% done drawing before shutting it down.
Make sure you actually destroy all the stuff Vulkan made for the window (its swapchain, framebuffers, images, etc).
Sometimes you even need to “nudge” Windows to refresh the window (like forcing a redraw), so it stops showing the frozen Vulkan picture.
So in short: Vulkan isn’t secretly still running — it just forgot to give the window back to Windows. DirectX is drawing, but Windows isn’t letting it through until Vulkan fully hands over the keys.
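A minimal teardown sketch in C++, assuming a typical swapchain setup (device, swapchain, surface, instance, and hwnd are your own handles):
// Wait until all queued GPU work is finished before destroying anything.
vkDeviceWaitIdle(device);
// Tear down everything Vulkan created for the window.
vkDestroySwapchainKHR(device, swapchain, nullptr);
vkDestroySurfaceKHR(instance, surface, nullptr);
// Nudge Windows to repaint so the stale Vulkan frame disappears.
RedrawWindow(hwnd, nullptr, nullptr, RDW_INVALIDATE | RDW_ERASE | RDW_UPDATENOW);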
Firebase Crashlytics does not run very easily on MAUI with .NET 9; depending on the project context many developers can use it, but in my context it does not work either. Try Sentry instead: the implementation is smooth and compatible, and it ran very easily: https://docs.sentry.io/platforms/dotnet/guides/maui/
As it turns out, the problem was not NextCloud. Using this tutorial I implemented a working login flow using only the `requests` package; the code is below. It does not yet handle performing any API request with the obtained access token beyond the initial authentication, nor does it handle using the refresh token to get a new access token when the old one expires. That is functionality an OAuth library usually handles, and this manual implementation does not do that for now. However, it proves the problem isn't with NextCloud.
I stepped through both the initial authlib implementation and the new one with a debugger, and the request sent to the NextCloud API for getting the access token looks the same in both cases at first glance. There must be something subtly wrong with the request in the authlib case that causes the API to run into an error. I will investigate this further and take this bug up with authlib. This question is answered, and if there is a bug fix in authlib I will edit the answer to mention which version fixes it.
from __future__ import annotations
from pathlib import Path
import io
import uuid
from urllib.parse import urlencode
import requests
from flask import Flask, render_template, jsonify, request, session, url_for, redirect
from flask_session import Session
app = Flask("webapp")
# app.config is set here, specifically settings:
# NEXTCLOUD_CLIENT_ID
# NEXTCLOUD_SECRET
# NEXTCLOUD_API_BASE_URL
# NEXTCLOUD_AUTHORIZE_URL
# NEXTCLOUD_ACCESS_TOKEN_URL
# set session to be managed server-side
Session(app)
@app.route("/", methods=["GET"])
def index():
if "user_id" not in session:
session["user_id"] = "__anonymous__"
session["nextcloud_authorized"] = False
return render_template("index.html", session=session), 200
@app.route("/nextcloud_login", methods=["GET"])
def nextcloud_login():
if "nextcloud_authorized" in session and session["nextcloud_authorized"]:
redirect(url_for("index"))
session['nextcloud_login_state'] = str(uuid.uuid4())
qs = urlencode({
'client_id': app.config['NEXTCLOUD_CLIENT_ID'],
'redirect_uri': url_for('callback_nextcloud', _external=True),
'response_type': 'code',
'scope': "",
'state': session['nextcloud_login_state'],
})
return redirect(app.config['NEXTCLOUD_AUTHORIZE_URL'] + '?' + qs)
@app.route('/callback/nextcloud', methods=["GET"])
def callback_nextcloud():
if "nextcloud_authorized" in session and session["nextcloud_authorized"]:
redirect(url_for("index"))
# if the callback request from NextCloud has an error, we might catch this here, however
# it is not clear how errors are presented in the request for the callback
# if "error" in request.args:
# return jsonify({"error": "NextCloud callback has errors"}), 400
if request.args["state"] != session["nextcloud_login_state"]:
return jsonify({"error": "CSRF warning! Request states do not match."}), 403
if "code" not in request.args or request.args["code"] == "":
return jsonify({"error": "Did not receive valid code in NextCloud callback"}), 400
response = requests.post(
app.config['NEXTCLOUD_ACCESS_TOKEN_URL'],
data={
'client_id': app.config['NEXTCLOUD_CLIENT_ID'],
'client_secret': app.config['NEXTCLOUD_SECRET'],
'code': request.args['code'],
'grant_type': 'authorization_code',
'redirect_uri': url_for('callback_nextcloud', _external=True),
},
headers={'Accept': 'application/json'},
timeout=10
)
if response.status_code != 200:
return jsonify({"error": "Invalid response while fetching access token"}), 400
response_data = response.json()
access_token = response_data.get('access_token')
if not access_token:
return jsonify({"error": "Could not find access token in response"}), 400
refresh_token = response_data.get('refresh_token')
if not refresh_token:
return jsonify({"error": "Could not find refresh token in response"}), 400
session["nextcloud_access_token"] = access_token
session["nextcloud_refresh_token"] = refresh_token
session["nextcloud_authorized"] = True
session["user_id"] = response_data.get("user_id")
return redirect(url_for("index"))
Starting with Android 12 (API 31), splash screens are handled by the SplashScreen API. Flutter Native Splash generates the correct drawable for android:windowSplashScreenAnimatedIcon, but Android caches the splash drawable only after the first run. So, if the generated resource is too large, not in the right format, or not properly referenced in your theme, Android falls back to the background color on first launch.
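For reference, this is the theme hook involved; a sketch of values-v31/styles.xml (the color and drawable names are whatever your setup generated, here illustrative):
<style name="LaunchTheme" parent="@android:style/Theme.Light.NoTitleBar">
    <item name="android:windowSplashScreenBackground">@color/splash_background</item>
    <item name="android:windowSplashScreenAnimatedIcon">@drawable/splash_icon</item>
</style>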
I am not sure if you have resolved this, but what you may be facing is a DynamoDB read consistency issue; I had a similar issue.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html
I am also struggling to set "de" as the keyboard layout on Ubuntu Core. I am using ubuntu-frame along with Chromium kiosk for my UI. In your example, I would recommend building your own snap that serves as a wrapper script running the Firefox browser. With the daemon flag set to simple and restart-always set inside your snapcraft.yaml file, it should at least come up again after the user closes it.
As simple as this?
Application.BringToFront;
Works for me (Windows 10).
I've fixed with the following:
Added app.UseStatusCodePagesWithRedirects("/error-page/{0}"); to the Program.cs.
Added the page CustomErrorPage.razor with the following content:
@page "/error-page/{StatusCode:int}"
<div>content</div>
@code {
[Parameter]
public int StatusCode { get; set; }
public bool Is404 => StatusCode == 404;
public string Heading => Is404 ? "Page not found 404" : $"Error {StatusCode}";
}
ElastiCache supports Bloom filters with Valkey 8.1, which is compatible with Redis OSS 7.2. You can see https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/BloomFilters.html for more information.
Hi, if you are using a cloud backup program, disable it when compiling.
mailto:[email protected],[email protected],[email protected]&cc=...
All other examples did not work for me. This one seems to work.
As of August 2025, Visual Studio 2017 community edition can be downloaded from this link https://aka.ms/vs/15/release/vs_community.exe without login in to a subscription.
Also, the professional version can be downloaded here https://aka.ms/vs/15/release/vs_professional.exe
I got this error with Django (AlterUniqueTogether) and MariaDB when adding a unique_together={('field1', 'field2')} constraint, where field2 was varchar(1000). The size of that field (1000x4 bytes) was too big for the max index key length of 3072 bytes. field1 was a foreign key, so somehow I was getting that error and spent a lot of time debugging it.
Other built-in types and their usages are:
Opaque: arbitrary user-defined data
kubernetes.io/service-account-token: ServiceAccount token
kubernetes.io/dockercfg: serialized ~/.dockercfg file
kubernetes.io/dockerconfigjson: serialized ~/.docker/config.json file
kubernetes.io/basic-auth: credentials for basic authentication
kubernetes.io/ssh-auth: credentials for SSH authentication
kubernetes.io/tls: data for a TLS client or server
bootstrap.kubernetes.io/token: bootstrap token data
Source: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types
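For example, a Secret declared with one of the built-in types, which makes the API server validate that the required keys are present:
apiVersion: v1
kind: Secret
metadata:
  name: my-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin       # key required by this type
  password: t0p-Secret  # key required by this type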
I know it's an old question; however, I have the impression it's still an issue...
If I understood correctly, C23 provides "bigint" types or similar; however, not all users are on C23, not everyone likes it, and it's still without I/O support?
I puzzled together a little library, vanilla C, header-only, which provides I/O from/to binary, octal, decimal and hex strings, plus some other functions, for gcc's builtin 128-bit integer types: Libint128, I/O for 128-bit integer types
I found a cool video about it: https://shre.su/YJ85
It can also happen due to broken text indexes. Please try invalidating the caches (File > Invalidate Caches, Invalidate and Restart).
If Outline does not work
Go to the base folder you want the structure for, in the terminal or PowerShell, then run this command:
tree /A /F > structure.txt
This will generate a txt file named structure.txt in that base location
I had this output path in my schema.prisma generator client block:
output = "../app/generated/prisma"
Instead of removing it, I changed the import like this:
// before
import { PrismaClient } from '@prisma/client'
// after
import { PrismaClient } from "../../generated/prisma";
<security:http name="apis"
pattern="/**"
is also matched?
try commenting out this part...
maybe ..change your request path or exclude these two paths when verifying the token?
<security:http name="employeeSecurityChain"
pattern="/auth/employees/token"
<security:http name="visitorSecurityChain"
pattern="/auth/visitors/token"
<security:http name="apis"
pattern="/api/**"
interface PageProps {
params: Promise<{ id: string }>;
}
const Page = async ({ params }: PageProps) => {
const { id } = await params;
return <div>Page for {id}</div>;
};
export default Page;
This does not work when I run the build.
I am not sure about the third party's audit verdict, but importantly, you should verify their suggestion; perhaps it is not required at all. The configuration file should contain the runtime and other configuration-related values.
However, if you insist on deleting these from your config file, you can do so. But keep in mind that:
On .NET 4.0+ apps, it defaults to the v4.0 runtime (then rolls forward to whatever 4.x is installed).
On .NET 2.0/3.5 apps, it defaults to the v2.0 CLR.
This usually works fine unless your app explicitly depends on a particular runtime behavior.
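For reference, the element in question typically looks like this in app.config (the sku value depends on your target framework):
<configuration>
  <startup>
    <!-- Without this, a 4.x app defaults to the v4.0 CLR and rolls forward. -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.8" />
  </startup>
</configuration>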
You need to listen to autoUpdater events, for example:
import { autoUpdater, dialog } from 'electron';
...
autoUpdater.on('update-downloaded', (event, releaseNotes, releaseName) => {
const dialogOpts = {
type: 'info',
buttons: ['Restart', 'Later'],
title: 'Application Update',
message: process.platform === 'win32' ? releaseNotes : releaseName,
detail:
'A new version has been downloaded. Restart the application to apply the updates.'
}
dialog.showMessageBox(dialogOpts).then((returnValue) => {
if (returnValue.response === 0) autoUpdater.quitAndInstall()
})
})
More in this tutorial.
You probably should do it outside of the _checkForUpdates function, so you don't attach multiple listeners to one event.
Use https://pub.dev/packages/json_factory_generator
import 'package:flutter/material.dart';
import 'generated/json_factory.dart'; // Contains generated JsonFactory
void main() {
// No initialization needed! 🎉
runApp(const MyApp());
}
// Parse single objects
final user = JsonFactory.fromJson<User>({"id": 1, "name": "Alice"});
// Parse lists with proper typing
final posts = JsonFactory.fromJson<List<Post>>([
{"id": 10, "title": "Hello", "content": "Content"},
{"id": 11, "title": "World", "content": "More content"},
]);
// Type-safe list parsing
final userList = JsonFactory.fromJson<List<User>>([
{"id": 1, "name": "Alice"},
{"id": 2, "name": "Bob"}
]);
You can solve it by putting all of them into different partitions of a single module, e.g. M:A, M:B, M:C.
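A minimal sketch of that layout (the module name M and the functions are illustrative):
// M.ixx: primary module interface, re-exporting the partitions
export module M;
export import :A;
export import :B;
// M-A.ixx: partition A
export module M:A;
export int a();
// M-B.ixx: partition B
export module M:B;
export int b();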
Connected Redmi Note 14 for testing
Emulator performance lags behind an older Redmi Note 12
Tried changing emulator RAM/graphics, updated SDK tools and USB drivers
No major improvement
Expectation: Better performance on the newer device
Does this help? https://docs.pytorch.org/data/0.7/generated/torchdata.datapipes.iter.ParquetDataFrameLoader.html
Pytorch used to have a torchtext library but it has been deprecated for over a year. You can check it here: https://docs.pytorch.org/text/stable/index.html
Otherwise, your best bet is to subclass one of the base dataset classes https://github.com/pytorch/pytorch/blob/main/torch/utils/data/dataset.py
Here is an example attempt at doing just that https://discuss.pytorch.org/t/efficient-tabular-data-loading-from-parquet-files-in-gcs/160322
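A minimal sketch of that subclassing approach, assuming your parquet files fit in memory and pandas/pyarrow can read them (column names are illustrative):
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class ParquetDataset(Dataset):
    """Loads one parquet file and serves rows as tensors."""
    def __init__(self, path, feature_cols, label_col):
        df = pd.read_parquet(path)  # uses pyarrow or fastparquet under the hood
        self.features = torch.tensor(df[feature_cols].values, dtype=torch.float32)
        self.labels = torch.tensor(df[label_col].values, dtype=torch.float32)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

loader = DataLoader(ParquetDataset("data.parquet", ["x1", "x2"], "y"), batch_size=32)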
My issue was simply that I was using a program to compile all of my docker-compose files into one. This program only kept the "essential" parts and didn't keep the command: --config /etc/otel/config.yaml part of my otel-collector, so the config wasn't being loaded properly into the collector.
I'm facing the same issue, and setting "extends": null didn't solve it for me either. I created the app using Create React App (CRA). When I run npm run dist, everything builds correctly, but when I execute myapp.exe, I get an error.
Can someone help me figure out what's going wrong?
My package.json is:
{
(...)
"main": "main.js",
(...)
"scripts": {
(...)
"start:electron": "electron .",
"dist": "electron-builder"
}
(...)
"build": {
"extends":null,
"appId": "com.name.app",
"files": [
"build/**/*",
"main.js",
"backend/**/*",
"node_modules/**/*"
],
"directories": {
"buildResources": "public",
"output": "dist"
},
},
"win": {
"icon": "public/iconos/logoAntea.png",
"target": "nsis"
},
"nsis": {
"oneClick": false,
"allowToChangeInstallationDirectory": true,
"perMachine": true,
"createDesktopShortcut": true,
"createStartMenuShortcut": true,
"shortcutName": "Datos Moviles",
"uninstallDisplayName": "Datos Moviles",
"include": "nsis-config.nsh"
}
}
}
I know a lot of time has passed since this problem was discussed; however, I got the same error with WPF today. It turned out that when I set DialogResult twice, I got this error on the second assignment. DialogResult does not behave like a storage location that one can set multiple times, and the resulting error message is very misleading. A similar situation was discussed in this chain of answers; in my case, however, I was setting DialogResult to "true" both times, i.e. to the same value.
Adding to Asclepius's answer here a way to view the commit history up to the common ancestor (including it).
I find this helpful to see what has been going on since the fork.
$ git checkout feature-branch
$ git log HEAD...$(git merge-base --fork-point master)~1
To use the latest stable version, run:
fvm use stable --pin
I found the answer by fiddling around. If anyone is interested:
I had to hover over the link in my Initiator column to retrieve the full stack trace, then right click on zone.js and Add script to ignore list.
Since TailwindCSS generates the expected CSS code perfectly, the issue is that the expected CSS code itself does not work properly. The expected CSS code is correct:
Input
<div class="cursor-pointer">...</div>
Generated CSS (check yourself: Playground)
.cursor-pointer {
cursor: pointer;
}
Since the syntax is correct and other overrides can be ruled out, the only remaining explanation is: a browser bug.
Some external sources mentioning a similar browser bug in Safari:
Answering my own question. Adding the org.freedesktop.DBus.Properties interface to my XML did not work, as the QDBusAbstractAdaptor (or someone else) is already implementing these methods, but the signal will not be emitted. At least I did not succeed in finding an "official" way.
But I found a workaround which works for me: https://randomguy3.wordpress.com/2010/09/07/the-magic-of-qtdbus-and-the-propertychanged-signal/
My adaptor parent class uses the setProperty and property functions of QObject.
I overloaded the setProperty function, calling the QObject one, and additionally emitted the PropertiesChanged signal manually like this:
QDBusMessage signal = QDBusMessage::createSignal(
"/my/object/path",
"org.freedesktop.DBus.Properties",
"PropertiesChanged");
signal << "my.inter.face";
QVariantMap changedProps;
changedProps.insert(thePropertyName, thePropertyValue);
signal << changedProps;
QStringList invalidatedProps;
signal << invalidatedProps;
QDBusConnection::systemBus().send(signal);
Not a very nice way, but at least the signal is emitted.
Anyway, I would be interested in a more official way of doing it...
Cheers
Thilo
Django has PermissionRequiredMixin, which each view can derive from. The mixin has a class property permission_required, so you can individually define the required permission for each view. You can also tie users to permission groups and assign multiple permissions to each group.
https://docs.djangoproject.com/en/5.2/topics/auth/default/#the-permissionrequiredmixin-mixin
https://docs.djangoproject.com/en/5.2/topics/auth/default/#groups
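A minimal sketch (the app label, permission codename, and model are illustrative):
from django.contrib.auth.mixins import PermissionRequiredMixin
from django.views.generic import ListView
from .models import Invoice  # illustrative model

class InvoiceListView(PermissionRequiredMixin, ListView):
    model = Invoice
    # the user needs this permission, or the view denies access
    permission_required = "billing.view_invoice"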
I ran into this issue too, but it was an entirely different kind of issue and totally my careless mistake.
I changed the IP of the machine, then tried to connect using SSMS.
It turns out I forgot to also change the IP in the TCP/IP protocol settings in SQL Server Network Configuration, but the locked-out-login error was really misleading in my case.
Just in case anyone did the same and didn't check: I almost created a new admin account just for that.
sudo apt install nvidia-cuda-dev
The "Test Connection" in the Glue Console only verifies network connectivity, not whether the SSL certificate is trusted during job runtime.
The actual job runtime uses a separate JVM where the certificate must be available and trusted. If AWS Glue can’t validate the server certificate chain during the job run, it throws the PKIX path building failed error.
This typically happens when:
The SAP OData SSL certificate is self-signed or issued by a private CA.
The certificate isn’t properly loaded at runtime for the job to trust it.
✅ What You’ve Done (Good Steps):
You're already trying to add the certificate using:
"JdbcEnforceSsl": "true",
"CustomJdbcCert": "s3://{bucket}/cert/{cert}"
✅ That’s correct — this tells AWS Glue to load a custom certificate.
📌 What to Check / Do Next:
1. Certificate Format
Make sure the certificate is in PEM format (.crt or .pem), not DER or PFX.
2. Certificate Path in S3
Ensure the file exists at the correct path and is readable by the Glue job (via its IAM role).
Example:
s3://your-bucket-name/cert/sap_server.crt
3. Permissions
The Glue job role must have permission to read the certificate from S3. Add this to the role policy:
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::your-bucket-name/cert/*"
}
4. Recheck Key Option Names
Make sure you didn’t misspell any keys like CustomJdbcCert or JdbcEnforceSsl. They are case-sensitive.
5. Glue Version Compatibility
If using Glue 3.0 or earlier, try upgrading to Glue 4.0, which has better support for custom JDBC certificate handling.
6. Restart Job after Changes
After uploading or changing the certificate, restart the job — don’t rely on retries alone.
I had this problem when I had the expected type in a file named alma.d.ts in a folder that also contained a regular alma.ts file. When I renamed the alma.ts file the error went away.
Go to Run -> Edit Configurations -> Additional options -> Check "Emulate terminal in the output console"
KeyStore Explorer (https://keystore-explorer.org/) could be used to extract the private key into a PEM file.
Open the certificate PFX file that contains the public and private key.
Right-click on the entry in KeyStore Explorer and select the Export | Export Private Key
Select OpenSSL from the list
Unselect the Encrypt option and choose the location to save the PEM file.
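If you prefer the command line, the same extraction can be done with OpenSSL (assuming it is installed; the file names are placeholders):
openssl pkcs12 -in certificate.pfx -nocerts -nodes -out private-key.pem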
Did you find a solution yet? I had the same problem and am still trying to figure it out. Which version of Spark do you have?
A simple workaround is to transform the geometry column to WKT or WKB and drop the geometry column. When reading, you have to transform it back. It's not nice, but functional.
df = df.withColumn("geometry_wkt", expr("ST_AsText(geometry)"))
You could use an <iframe> to load the websites and animate them with CSS transitions or @keyframes.
See: https://www.w3schools.com/tags/tag_iframe.asp and https://www.w3schools.com/cssref/css3_pr_transition.php or https://www.w3schools.com/cssref/atrule_keyframes.php
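A tiny sketch of the idea (the URLs are placeholders):
<style>
  .slider { display: flex; transition: transform 1s ease; }
  .slider.shifted { transform: translateX(-100%); }
  .slider iframe { flex: 0 0 100%; height: 400px; border: 0; }
</style>
<div class="slider" id="slider">
  <iframe src="https://example.com/a"></iframe>
  <iframe src="https://example.com/b"></iframe>
</div>
<script>
  // toggle the class every few seconds to slide between the two sites
  setInterval(() => document.getElementById('slider').classList.toggle('shifted'), 3000);
</script>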
The Places Details API only returns up to 5 reviews for a place. That limit is hard and there is no pagination for the reviews array. The next_page_token you are checking applies to paginated search results, not to reviews in a Place Details response. To fetch all reviews for your own verified business, you must use the Google Business Profile API’s accounts.locations.reviews.list, which supports pagination.
I guess you need to install their Code Coverage plugin too:
https://bitbucket.org/atlassian/bitbucket-code-coverage
https://nextjs.org/docs/app/api-reference/functions/redirect
I'm new to Next.js myself, but maybe something like this could work: perform the request whenever it is triggered, await the response, and call the redirect function accordingly.
For some file formats, like `flac`, pydub requires ffmpeg to be installed, and it throws this error when ffmpeg is not found.
Access via window.ZOHO
In the script where you have used ZOHO, it will not work directly, as the SDK is not exposed there; to use it, make the Zoho library global and use window.ZOHO.
In your script, just replace ZOHO with window.ZOHO.
In VS Code, go to Settings, search for javascript.validation and uncheck the checkbox.
Close and reopen VS Code, if required.
From AWS WEB console -
And the link to create the repository after the latest changes.
close android studio
open via command
open -a "Android Studio"
linux: Pulling from library/hello-world
198f93fd5094: Retrying in 1 second
error pulling image configuration: download failed after attempts=6: dialing docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com:443 container via direct connection because disabled has no HTTPS proxy: connecting to docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com:443: dial tcp: lookup docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com: no such host
I've solved a similar problem by editing nginx.conf:
sudo nano /etc/nginx/nginx.conf
then change 'user www-data' to 'user sudo_user', where sudo_user is your configured sudo user.
Simply do this:
input {
field-sizing: content;
text-align: center;
min-width: 25%;
}
from typing import get_origin, get_args
origin = get_origin(klass)
args = get_args(klass)
if origin is list and args:
return _func1(data, args[0])
elif origin is dict and len(args) == 2:
return _func2(data, args[1])
messagebox.showerror(
"Ruta requerida",
"Debes indicar una ruta completa. Usa 'Examinar...' o escribe una ruta absoluta (por ejemplo, C:\\carpeta\\archivo.txt)."
)
return
# Evitar que se indique una carpeta como archivo
if os.path.isdir(archivo_path):
messagebox.showerror(
"Error",
"La ruta indicada es una carpeta. Especifica un archivo (por ejemplo, datos.txt)."
)
return
# Verificar/crear carpeta
try:
dir_path = os.path.dirname(os.path.abspath(archivo_path))
except (OSError, ValueError):
messagebox.showerror("Error", "La ruta del archivo destino no es válida")
return
if dir_path and not os.path.exists(dir_path):
crear = messagebox.askyesno(
"Crear carpeta",
f"La carpeta no existe:\n{dir_path}\n\n¿Deseas crearla?"
)
if crear:
try:
os.makedirs(dir_path, exist_ok=True)
except OSError as e:
messagebox.showerror("Error", f"No se pudo crear la carpeta:\n{e}")
return
else:
return
self._mostrar_progreso_gen()
header = (
"ID|Nombre|Email|Edad|Salario|FechaNacimiento|Activo|Codigo|Telefono|Puntuacion|Categoria|Comentarios\n"
)
with open(archivo_path, 'w', encoding='utf-8') as f:
f.write(header)
tamano_actual = len(header.encode('utf-8'))
rid = 1
while tamano_actual < tamano_objetivo_bytes:
linea = self._generar_registro_aleatorio(rid)
f.write(linea)
tamano_actual += len(linea.encode('utf-8'))
rid += 1
if rid % 1000 == 0:
# Actualización periódica del progreso para no saturar la UI
try:
if self.root.winfo_exists():
progreso = min(100, (tamano_actual / tamano_objetivo_bytes) * 100)
self.progress['value'] = progreso
self.estado_label.config(
text=f"Registros... {rid:,} registros ({progreso:.1f}%)")
self.root.update()
except tk.TclError:
break
tamano_real_bytes = os.path.getsize(archivo_path)
tamano_real_mb = tamano_real_bytes / (1024 * 1024)
try:
if self.root.winfo_exists():
self.progress['value'] = 100
self.estado_label.config(text="¡Archivo generado exitosamente!", fg='#4CAF50')
self.root.update()
except tk.TclError:
pass
abrir = messagebox.askyesno(
"Archivo Generado",
"Archivo creado exitosamente:\n\n"
f"Ruta: {archivo_path}\n"
f"Tamaño objetivo: {tamano_objetivo_mb:,.1f} MB\n"
f"Tamaño real: {tamano_real_mb:.1f} MB\n"
f"Registros generados: {rid-1:,}\n\n"
"¿Deseas abrir la carpeta donde se guardó el archivo?"
)
if abrir:
try:
destino = os.path.abspath(archivo_path)
# Abrir Explorer seleccionando el archivo generado
subprocess.run(['explorer', '/select,', destino], check=True)
except (OSError, subprocess.CalledProcessError) as e:
print(f"No se pudo abrir Explorer: {e}")
try:
if self.root.winfo_exists():
self.root.after(3000, self._ocultar_progreso_gen)
except tk.TclError:
pass
except (IOError, OSError, ValueError) as e:
messagebox.showerror("❌ Error", f"Error al generar el archivo:\n{str(e)}")
try:
if self.root.winfo_exists():
self.estado_label.config(text="❌ Error en la generación", fg='red')
self.root.after(2000, self._ocultar_progreso_gen)
except tk.TclError:
pass
# ------------------------------
# Lógica: División de archivo
# ------------------------------
def _dividir_archivo(self):
"""Divide un archivo en múltiples partes respetando líneas completas.
Reglas y comportamiento:
- El tamaño máximo de cada parte se define en "Tamaño por parte (MB)".
- No corta líneas: si una línea no cabe en la parte actual y ésta ya tiene
contenido, se inicia una nueva parte y se escribe allí la línea completa.
- Los nombres de salida se forman como: <base>_NN<ext> (NN con 2 dígitos).
Manejo de errores:
- Valida ruta de origen, tamaño de parte y tamaño > 0 del archivo.
- Muestra mensajes de error/aviso según corresponda.
"""
try:
src = self.split_source_file.get()
if not src or not os.path.isfile(src):
messagebox.showerror("Error", "Selecciona un archivo origen válido")
return
part_size_mb = self.split_size_mb.get()
if part_size_mb <= 0:
messagebox.showerror("Error", "El tamaño por parte debe ser mayor a 0")
return
part_size_bytes = int(part_size_mb * 1024 * 1024)
total_bytes = os.path.getsize(src)
if total_bytes == 0:
messagebox.showwarning("Aviso", "El archivo está vacío")
return
self._mostrar_progreso_split()
base, ext = os.path.splitext(src)
part_idx = 1
bytes_procesados = 0
bytes_en_parte = 0
out = None
def abrir_nueva_parte(idx: int):
nonlocal out, bytes_en_parte
if out:
out.close()
nombre = f"{base}_{idx:02d}{ext}"
out = open(nombre, 'wb') # escritura binaria
bytes_en_parte = 0
abrir_nueva_parte(part_idx)
line_count = 0
with open(src, 'rb') as fin: # lectura binaria
for linea in fin:
lb = len(linea)
# Si excede y ya escribimos algo, nueva parte
if bytes_en_parte > 0 and bytes_en_parte + lb > part_size_bytes:
part_idx += 1
abrir_nueva_parte(part_idx)
# Escribimos la línea completa
out.write(linea)
bytes_en_parte += lb
bytes_procesados += lb
line_count += 1
# Actualizar progreso cada 1000 líneas
if line_count % 1000 == 0:
try:
if self.root.winfo_exists():
progreso = min(100, (bytes_procesados / total_bytes) * 100)
self.split_progress['value'] = progreso
self.split_estado_label.config(
text=f"Procesando... {line_count:,} líneas ({progreso:.1f}%)")
self.root.update()
except tk.TclError:
break
if out:
out.close()
try:
if self.root.winfo_exists():
self.split_progress['value'] = 100
self.split_estado_label.config(text="¡Archivo dividido exitosamente!", fg='#4CAF50')
self.root.update()
except tk.TclError:
pass
abrir = messagebox.askyesno(
"División completada",
"El archivo se dividió correctamente en partes con sufijos _01, _02, ...\n\n"
f"Origen: {src}\n"
f"Tamaño por parte: {part_size_mb:.1f} MB\n\n"
"¿Deseas abrir la carpeta del archivo origen?"
)
if abrir:
try:
# Si existe la primera parte, seleccionarla; si no, abrir carpeta del origen
base, ext = os.path.splitext(src)
primera_parte = f"{base}_{1:02d}{ext}"
if os.path.exists(primera_parte):
subprocess.run(['explorer', '/select,', os.path.abspath(primera_parte)], check=True)
else:
carpeta = os.path.dirname(src)
subprocess.run(['explorer', carpeta], check=True)
except (OSError, subprocess.CalledProcessError) as e:
print(f"No se pudo abrir Explorer: {e}")
try:
if self.root.winfo_exists():
self.root.after(3000, self._ocultar_progreso_split)
except tk.TclError:
pass
except (IOError, OSError, ValueError) as e:
messagebox.showerror("❌ Error", f"Error al dividir el archivo:\n{str(e)}")
try:
if self.root.winfo_exists():
self.split_estado_label.config(text="❌ Error en la división", fg='red')
self.root.after(2000, self._ocultar_progreso_split)
except tk.TclError:
pass
def main():
"""Punto de entrada de la aplicación.
Crea la ventana raíz, instancia la clase de la UI, centra la ventana y
arranca el loop principal de Tkinter.
"""
root = tk.Tk()
GeneradorArchivo(root)
# Centrar ventana
root.update_idletasks()
width = root.winfo_width()
height = root.winfo_height()
x = (root.winfo_screenwidth() // 2) - (width // 2)
y = (root.winfo_screenheight() // 2) - (height // 2)
root.geometry(f"{width}x{height}+{x}+{y}")
root.mainloop()
if __name__ == "__main__":
main()
https://forum.rclone.org/t/google-drive-service-account-changes-and-rclone/50136 please check this out - new service accounts made after 15 April 2025 will no longer be able to own drive items. Old service accounts will be unaffected.
I know this is an old post, but I am wondering what the state of play is now (2025) for using deck.gl with Vue.js (my specific use case is GeoJSON visualisation)?
The suggested project at vue_deckgl still seems alive, but I also noticed another project vue-deckgl-suite.
Are there other alternatives?
Is vue-deckgl-suite the same thing as vue_deckgl, with a slightly different name?
And do the answers to my questions depend on Vue 2 vs Vue 3 compatibility?
Using the SSR + PKCE flow works. However, make sure the cookies used are whitelisted: I wasted two whole days not realizing why I was getting "Auth session missing", because the cookies didn't get placed, in case you are using a cookie manager 😩
After spending ages trying to get this working where I set .allowsHitTesting(true) and tried to let the SpriteView children manage all interaction and feed it back to the RealityView when needed, I decided it just wasn't possible. RealityKit doesn't really want to play nicely with anything else.
So what I did was create a simple ApplicationModel:
public class ApplicationModel : ObservableObject {
@Published var hudInControl : Bool
init() {
self.hudInControl = false
}
static let shared : ApplicationModel = ApplicationModel()
}
and then in the ContentView do this:
struct ContentView: View {
@Environment(\.mainWindowSize) var mainWindowSize
@StateObject var appModel : ApplicationModel = .shared
var body: some View {
ZStack {
RealityView { content in
// If iOS device that is not the simulator,
// use the spatial tracking camera.
#if os(iOS) && !targetEnvironment(simulator)
content.camera = .spatialTracking
#endif
createGameScene(content)
}.gesture(tapEntityGesture)
// When this app runs on macOS or iOS simulator,
// add camera controls that orbit the origin.
#if os(macOS) || (os(iOS) && targetEnvironment(simulator))
.realityViewCameraControls(.orbit)
#endif
let hudScene = HUDScene(size: mainWindowSize)
SpriteView(scene: hudScene, options: [.allowsTransparency])
// this following line either allows the HUD to receive events (true), or
// the RealityView to receive Gestures. How can we enable both at the same
// time so that SpriteKit SKNodes within the HUD node tree can receive and
// respond to touches as well as letting RealityKit handle gestures when
// the HUD ignores the interaction?
//
.allowsHitTesting(appModel.hudInControl)
}
}
}
this then gives the app some control over whether RealityKit, or SpriteKit get the user interaction events. When the app starts, interaction is through the RealityKit environment by default.
When the user then triggers something that gives control to the 2D environment, appModel.hudInControl is set to true and it just works.
For those situations where I have a HUD-based button that I want to be sensitive to taps even when the HUD is not in control, the tapEntityGesture handler offers the tap to the HUD first; if the HUD does not consume it, I then use it as needed within the RealityView.
The reason you don’t see the extra artifacts in a regular mvn dependency:tree is because the MUnit Maven plugin downloads additional test-only dependencies dynamically during the code coverage phase, not as part of your project’s declared pom.xml dependencies. The standard dependency:tree goal only resolves dependencies from the project’s dependency graph, so it won’t include those.
mvn dependency:tree -Dscope=test -Dverbose
This will at least show all test-scoped dependencies that Maven resolves from your POM.
mvn dependency:list -DincludeScope=test -DoutputFile=deps.txt
Then run the plugin phase that triggers coverage (munit:coverage-report) in the same build. This way you can compare which artifacts are pulled in.
mvn dependency:go-offline -DincludeScope=test
This forces Maven to download everything needed (including test/coverage). Then inspect the local repository folder (~/.m2/repository) to see what was actually pulled in by the MUnit plugin.
mvn -X test
mvn -X munit:coverage-report
With -X, Maven logs every artifact resolution. You’ll be able to see which additional dependencies the plugin downloads specifically for coverage.
✅ Key Point:
Those extra jars are not “normal” dependencies of your project—they are plugin-managed artifacts that the MUnit Maven plugin itself pulls in. So the only way to see them is either with -X debug logging during plugin execution, or by looking in the local Maven repo after running coverage.
If you want a consolidated dependency tree for test execution including MUnit coverage, run the build with:
mvn clean test munit:coverage-report -X
and parse the “Downloading from …” / “Resolved …” sections in the logs.
How can I get the data from line x to line y, where lines x and y are identified by name?
Example:
set 1 = MSTUMASTER
3303910000
3303920000
3304030000
3303840000
set 2 = LEDGER
3303950000
I want to get the data under set 1, as below:
3303910000
3303920000
3304030000
3303840000
See my method here; I installed it successfully in 2025 for Visual Studio 2022.
I locked myself out with a mistaken security setting and had to search for the config file without any hint from the web UI.
Mine (Windows 7) is, surprisingly, in a different location: C:\Users\<user name>\AppData\Local\Jenkins\.jenkins
I'm trying to figure out an 8-digit number code; the digits available are 1 2 3 4 5 6 7 8 9 0, and these are the digits I know: 739463.
In my case, enabling Fast Deployment fixed this error:
Project > Properties > Android > Options > Fast Deployment
Reference: https://github.com/dotnet/maui/issues/29941
I have a solution here: you can use uiautomation to find the browser control and activate it, while starting a thread that invokes the system-level Enter key. After uiautomation activates the browser window, the thread presses Enter once a second, and the pop-up window of this browser is skipped correctly.
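A rough sketch of that idea in Python; the window title is a placeholder, and treat the uiautomation call names as assumptions to verify against the package docs:
import threading
import uiautomation as auto

def press_enter_periodically(stop):
    # press Enter once a second until told to stop
    while not stop.is_set():
        auto.SendKeys('{Enter}')
        stop.wait(1)

stop = threading.Event()
threading.Thread(target=press_enter_periodically, args=(stop,), daemon=True).start()
win = auto.WindowControl(searchDepth=1, Name='My Browser Window')  # placeholder title
win.SetActive()  # bring the browser to the foreground
# ... trigger the action that opens the popup, then:
stop.set()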
React Navigation doesn't use native tabs; it uses JS tabs to mimic the behaviour of native tabs. If you want liquid glass tabs, you need to use the react-native-bottom-tabs library to replace the React Navigation tabs with native tabs. You then need to do a pod install for the linking, and you should be good to go.
The problem is that your function pdf_combiner() never gets called. In your code the try/except block is indented inside the function, so Python just defines the function and exits without ever executing it.
You can fix it by moving the function call outside and passing the output filename.
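In sketch form, with the function body elided (the file name is illustrative):
def pdf_combiner(output_filename):
    # ... build the combined PDF here ...
    pass

if __name__ == "__main__":
    try:
        pdf_combiner("combined.pdf")  # actually invoke the function
    except Exception as e:
        print(f"Failed to combine PDFs: {e}")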
Unfortunately, ASG scale-down is controlled by the TargetTracking AlarmLow CloudWatch alarm. It needs to see 15 consecutive checks, 1 minute apart, before triggering a scale-down. It won't let you edit the alarm, since it is controlled by ECS cluster auto scaling. I am trying to find an environment variable to change it, but so far, nothing.
The mentioned ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION and ECS_IMAGE_CLEANUP_INTERVAL don't seem to be related to ASG/EC2 scale down.
int deckSize = deck.Count;
// show the last 5 cards in order
for (int i = 0; i < 5; i++)
{
var drawnPage = deck[deckSize - 1 - i]; // shift by i each time
buttonSlots[i].GetComponent<PageInHandButtonScript>().setPage(drawnPage);
buttonSlots[i].GetComponent<UnityEngine.UI.Image>().sprite = drawnPage.getSprite();
Debug.Log($"Page added to hand: {drawnPage.element} rune");
}
// now remove those 5 cards from the deck
deck.RemoveRange(deckSize - 5, 5);
Debug.Log($"Filled up hand. New Deck size: {deck.Count}");