Have you found a solution in the meantime? I have the same problem with .identityBanner. In the password entry window, the e-mail address is always displayed on a white background. It makes little sense that you can only change the background color of the rest.
I found out that I made a mistake: I had overwritten the lib folder of the new version I installed with the lib folder of the old version. Now it shows the correct version.
root@SRVHML:/opt/tomcat/bin# ./version.sh
Using CATALINA_BASE: /opt/tomcat
Using CATALINA_HOME: /opt/tomcat
Using CATALINA_TMPDIR: /opt/tomcat/temp
Using JRE_HOME: /usr/lib/jvm/java-1.8.0-amazon-corretto
Using CLASSPATH: /opt/tomcat/bin/bootstrap.jar:/opt/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:
Server version: Apache Tomcat/9.0.105
Server built: May 7 2025 18:36:02 UTC
Server number: 9.0.105.0
OS Name: Linux
OS Version: 5.4.0-190-generic
Architecture: amd64
JVM Version: 1.8.0_392-b08
JVM Vendor: Amazon.com Inc.
The min-h-0 class would have less CSS specificity than the default style. To make it work, you can either use min-h-0!:
<div class="collapse border border-base-300 bg-base-100 text-xs">
<input type="checkbox" class="min-h-0!" />
<div class="collapse-title min-h-0!">How do I create an account?</div>
<div class="collapse-content">Click the "Sign Up" button in the top right corner and follow the registration process.</div>
</div>
https://play.tailwindcss.com/ODZ4Ga6Hz1
Or use:
.collapse {
    > input,
    > .collapse-title {
        min-height: 0;
    }
}
I added IHttpContextAccessor as a constructor argument for my handler. Using that, I can rerun my logic to completion. In fact, the handler now gets called 3 frickin' times! So I don't think this post (or my code) meets the worthy criteria for SO.
In my Edit 2 I explain how I solved my issue.
Neither item_number nor custom seem to arrive in the notification, nor do they appear anywhere in your transaction history ... but item_name does if you add it as a hidden field to your form.
I tried to follow @mariaiffonseca, but I get an error every time it tries to build the library: Creating Android archive under prebuilt: failed. Some log messages follow:
> Task :ffmpeg-kit-android-lib:compileReleaseJavaWithJavac FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':ffmpeg-kit-android-lib:compileReleaseJavaWithJavac'.
> Could not resolve all files for configuration ':ffmpeg-kit-android-lib:androidJdkImage'.
> Failed to transform core-for-system-modules.jar to match attributes {artifactType=_internal_android_jdk_image, org.gradle.libraryelements=jar, org.gradle.usage=java-runtime}.
> Execution failed for JdkImageTransform: /Users/rca/Library/Android/sdk/platforms/android-33/core-for-system-modules.jar.
> Error while executing process /Users/rca/Library/Java/JavaVirtualMachines/corretto-21.0.5/Contents/Home/bin/jlink with arguments {--module-path /Users/rca/.gradle/caches/transforms-3/ef45e0af4d32a105d29fb530a1beed17/transformed/output/temp/jmod --add-modules java.base --output /Users/rca/.gradle/caches/transforms-3/ef45e0af4d32a105d29fb530a1beed17/transformed/output/jdkImage --disable-plugin system-modules}
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
They might encrypt the data, save it, and then use that information every time they want to charge.
Stripe allows you to make a charge from ACH data but not save it as a customer's payment method.
My issue was that I didn't have a localization for the Subscription Group.
It's confusing UX from Apple: all subscriptions were saying "Missing Metadata", hinting there must be an issue with them. There wasn't!
As soon as I updated the Subscription Group section, my subscriptions turned to "Ready to Submit".
In case you'd like to use an existing solution, you can check out:
https://assetstore.unity.com/packages/tools/terrain/procedural-floating-island-generator-319041
Meanwhile I also tried to use a custom boto3 session like below:
from boto3.session import Session as Boto3Session
from botocore.config import Config
from botocore.httpsession import URLLib3Session
from botocore.session import Session as BotocoreSession

class CustomURLLib3Session(URLLib3Session):  # type: ignore[misc]
    def __init__(self, config: CloudSecurityWorkerConfigs):
        if config.USE_KRAKEN:
            log.info(f'proxy: {config.KRAKEN_PROXY}')
            cert_key = get_app_certs()
            if cert_key:
                cert, key = cert_key
                log.info(f'cert: {cert}, key: {key}')
            super().__init__(
                proxies=config.KRAKEN_PROXY,
                verify='<ca-bundle>.crt',
                proxies_config={
                    'proxy_ca_bundle': '<ca-bundle>.crt',
                    'proxy_client_cert': cert_key,
                },
            )
        else:
            super().__init__()

botocore_session = BotocoreSession()
botocore_session.register_component('httpsession', CustomURLLib3Session(config))
boto3_session = Boto3Session(botocore_session=botocore_session)

# Optional: set retries or other config options
s3_config = Config(retries={'max_attempts': 6, 'mode': 'standard'})

# Create the S3 client using the patched session
test_aws_client = boto3_session.client(
    's3',
    aws_access_key_id=config.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=config.AWS_ACCESS_SECRET_KEY,
    config=s3_config,
)
log.info(f'client created: {test_aws_client}')
paginator = test_aws_client.get_paginator('list_objects_v2')
But I get the error below:
2025-05-15 13:21:46,740 cloudsecurityworker.worker [ERROR] Failed to connect to aws: Could not connect to the endpoint URL: "https://<bucket_name>.s3.amazonaws.com/?list-type=2&prefix=dummy%2F&encoding-type=url"
Traceback (most recent call last):
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/httpsession.py", line 464, in send
urllib_response = conn.urlopen(
^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connectionpool.py", line 841, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/util/retry.py", line 449, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/util/util.py", line 39, in reraise
raise value
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connectionpool.py", line 787, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connectionpool.py", line 488, in _make_request
raise new_e
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connectionpool.py", line 464, in _make_request
self._validate_conn(conn)
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connectionpool.py", line 1093, in _validate_conn
conn.connect()
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connection.py", line 704, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/urllib3/connection.py", line 213, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x71393f4fea90>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/cloudsecurityworker/worker.py", line 84, in main
for page in page_iterator:
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/paginate.py", line 269, in __iter__
response = self._make_request(current_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/paginate.py", line 357, in _make_request
return self._method(**current_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/client.py", line 565, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/client.py", line 999, in _make_api_call
http, parsed_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/client.py", line 1023, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/endpoint.py", line 119, in make_request
return self._send_request(request_dict, operation_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/endpoint.py", line 229, in _send_request
raise exception
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/endpoint.py", line 279, in _do_get_response
http_response = self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/endpoint.py", line 375, in _send
return self.http_session.send(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/content/lid/apps/cloud-security-worker/i001/libexec/cloud-security-worker.pyz_121b45119d28139a516068d60967f047fbfa1bb51f837990300dd4a0099e35f2/site-packages/botocore/httpsession.py", line 493, in send
raise EndpointConnectionError(endpoint_url=request.url, error=e)
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://<bucket_name>.s3.amazonaws.com/?list-type=2&prefix=dummy%2F&encoding-type=url"
I am stuck on how to resolve this issue.
This is a simple case of "it's not doing what you think it's doing". The PowerToys ruler measures how many pixels something physically takes on your screen, after scaling. Scaling settings can be found under System > Display > Scale & Layout.
You are probably on 150% scaling, hence you get 48 x 150 / 100 = 72px. On Chrome the ruler will measure 2px less, as it does not include the border, but on Firefox the border is included.
On 100% scaling you will get the exact size of 48, at least on Firefox.
I found strange behavior when using namespaces in XML.
I'm trying to change <tps:style type="italic"> into <tps:c type="italic">.
I found that tag.name = "tps:c" creates <tps:tps:c type="italic">.
I worked around it by using tag.name = "c", and it set it to <tps:c type="italic">.
It looks like @Brett Mchdonald is correct. You have a typo in your post; I'd check the spelling of grid-template-columns.
// create a reference to the linked stylesheet
const stylesheet = document.styleSheets[0];
const rules = stylesheet.cssRules || stylesheet.rules;

// loop through the stylesheet to find the classes to be modified and modify them
for (let i = 0; i < rules.length; i++) {
    if (rules[i].selectorText === '.grid-container') {
        rules[i].style['background-color'] = 'yellow';
        rules[i].style['grid-template-columns'] = 'auto auto auto';
        break;
    }
}
Likely you set the trigger to fire off at midnight, which was several hours before you built the flow. Changing the hour when it's supposed to execute the steps does not change the time the flow triggers.
The regular expression ((<p\s*class="translate"[^>]*>.*?<\/p>)|(<code>.*?</code>))(*SKIP)(*F)|<strong>.*?</strong> helped find what is between <strong> and </strong> if it met the conditions above.
According to the OP in a comment:
using a class based view was triggering a query when I opened the page. I had to create a new page with just the input query then use the query results on a separate page
Yes, if the page is vulnerable to XSS (Cross-Site Scripting), an attacker could run their own script and steal the password stored in the JavaScript variable. Even though it’s not saved in cookies, the password still stays in memory and can be accessed through JavaScript if the attacker injects code into the page. CSRF wouldn’t work here, but XSS could. To stay safe, avoid keeping passwords in variables and always sanitize any data shown on the page.
-- Doesn't work, though, it really should?:
select
count(*),
(select count(*) from dual)
from dual;
No, it shouldn't. The query is doing a count(*) and selecting a fixed value, (select count(*) from dual), as if it were a column, so to use count(*) you need to group by. As long as (select count(*) from dual) is treated as a value, we should group by that value; the problem is that it doesn't really exist as a column, so you can't refer to it in the group by as "group by (select count(*) from dual)", because you can't group by subqueries. In other words, when you query a table, you can apply group by. A possible correct solution to your issue would be:
select
    count(*) b,
    a
from dual, (select count(*) a from dual)
group by a;
Regards
As @sergey-fedotov suggested, you need a custom implementation.
Give the code below a try:
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

class StdClassNormalizer implements NormalizerInterface
{
    public function normalize($object, string $format = null, array $context = []): array|\stdClass
    {
        if ($object instanceof \stdClass && empty(get_object_vars($object))) {
            return new \stdClass();
        }
        return (array) $object;
    }

    public function supportsNormalization($data, string $format = null, array $context = []): bool
    {
        return $data instanceof \stdClass;
    }
}
Don't forget to register StdClassNormalizer as a service.
Uninstalling Chrome will not delete desktop shortcuts to websites unless you manually remove them. It is possible that some web apps may be deleted with Chrome (like Spotify Web, Twitter).
For the error (0x80004005):
Try running Chrome with admin rights, then go to chrome://settings/help and try updating again.
An alternative is using Google's Chrome Cleanup Tool.
Reinstall the latest Chrome:
Download from: https://www.google.com/chrome/
You can write the title in Bold for example and use a newline in HTML:
**Some Title**<br />
Without a regular expression, but turning the logic around to make use of the % wildcard in LIKE. I think this is pretty close to the logic that you had in mind.
SELECT DISTINCT(CITY)
FROM STATION
WHERE 'aeiou' LIKE CONCAT( "%",LEFT(CITY, 1),"%")
I'm running into an issue when trying to install dependencies using npm install on my Windows 11 machine. The installation fails with the following error:
npm ERR! code ERR_SSL_CIPHER_OPERATION_FAILED
npm ERR! errno ERR_SSL_CIPHER_OPERATION_FAILED
npm ERR! Invalid response body while trying to fetch https://registry.npmjs.org/scheduler: A8070000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:c:\ws\deps\openssl\openssl\providers\implementations\ciphers\ciphercommon_gcm.c:320:
What I've tried so far:
Cleared npm cache: npm cache clean --force
Tried with legacy peer dependencies: npm install --legacy-peer-deps
Node.js and npm versions:
node -v -> v18.18.2
npm -v -> 9.8.1
Ran terminal as Administrator
Deleted node_modules and package-lock.json and reinstalled
Updated Node.js to the latest LTS
Changed npm registry to HTTP: npm config set registry http://registry.npmjs.org/
Disabled strict SSL: npm config set strict-ssl false
Verified OpenSSL version (openssl version)
Temporarily disabled antivirus/firewall
Tried yarn install instead of npm install
None of these steps resolved the issue.
My questions:
Could this error be due to corrupted OpenSSL libraries or a broken Node installation?
Is there a known issue with specific cipher configurations on Windows 11?
Are there environment variables or system settings that could affect SSL cipher operations for Node/npm?
System info:
OS: Windows 11 (fully updated)
Node.js: v18.18.2
npm: 9.8.1
Shell: PowerShell (Admin mode)
Would really appreciate any help or insight. Thank you! 🙏
According to the official Python MT5 documentation, the copy_rates functions create 'datetime' objects in the UTC time zone to avoid applying a local time zone offset.
Regardless of the time zone you use, it will always represent UTC when getting candle data. For this reason, when you added 3 hours in the now function, the dataframe displayed the value you wanted.
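As an illustration, here is a minimal sketch using the MetaTrader5 package (the symbol, date, and candle count are hypothetical) that requests candles with an explicit UTC timestamp and keeps the returned epoch seconds in UTC:

from datetime import datetime, timezone

import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"initialize() failed: {mt5.last_error()}")

# Request candles starting from an explicit UTC timestamp
utc_from = datetime(2025, 5, 1, tzinfo=timezone.utc)
rates = mt5.copy_rates_from("EURUSD", mt5.TIMEFRAME_H1, utc_from, 10)
mt5.shutdown()

df = pd.DataFrame(rates)
# 'time' is seconds since the epoch; keep it in UTC instead of localizing it
df["time"] = pd.to_datetime(df["time"], unit="s", utc=True)
print(df[["time", "open", "close"]])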
import pandas as pd

# Make a copy of the DataFrame
df_copy = df.copy()

result = []
for drink in order:
    idx = df_copy[df_copy['Drink'] == drink].index.min()
    if pd.notna(idx):
        result.append(df_copy.loc[idx])
        df_copy = df_copy.drop(index=idx)

ordered_df = pd.DataFrame(result)
I know this is an old post. If you use non-nullable types (like int, DateTime), they always have a default value (e.g., 0), so [Required] won't catch them being "empty."
Fix: use nullable types if you want [Required] to validate them:
[Required]
public int? AquiferID { get; set; }
This can happen if the variable in the CI/CD settings section is marked as protected.
Did you check the "Protect variable" checkbox in the variable settings?
Another possibility is that your feature branch needs to be marked as protected in the repository's branch settings.
Actually, there's a solution for this today: you can have GitHub build packages automatically for you by using PyDeployment: https://github.com/pydeployment/pydeployment
There's a handy set of starter templates for each of the major toolkits, but it's good for scripts too!
Note of caution: the devil is in the details! Packaged Python apps have their own special quirks on each platform.
You can just use the r flag:
$newstring = $oldstring =~ s/foo/bar/gr;
Try:
encoding = "latin1"
@Mahrez, it seems to be the same error, so nothing changed. I tested the code, and the problem is that the form is not valid; your answer only applies the save code once the form is valid. Could you please check again, or can anyone else help me?
def Insert_group(request):
    print(f" The begining Request method is : {request.method}")
    sEtat = "crea"
    data = {
        "created_at": datetime.today(),
        "updated_at": datetime.today(),
        "UTIL_CREATION": settings.WCURUSER,
        "UTIL_MODIF": settings.WCURUSER,
        "Soc_sigle": settings.WSOCGEN,
    }
    if request.method == 'POST':
        print('Yes i am in POST method')
        LibELE_GROUPE = request.POST.get("LibELE_GROUPE")
        form = f_groupe_userForm(request.POST)
        print(request.method)
        if form.has_changed():
            print("The following fields changed: %s" % ", ".join(form.changed_data))
        if form.is_valid():
            groupe = form.save(commit=False)
            groupe.updated_at = datetime.today()
            groupe.created_at = datetime.today()
            groupe.UTIL_CREATION = settings.WCURUSER
            groupe.UTIL_MODIF = settings.WCURUSER
            groupe.Soc_sigle = settings.WSOCGEN
            if LibELE_GROUPE is not None:
                print(f"LibELE_GROUPE value is : {LibELE_GROUPE}")
                if 'Ajouter' in request.POST:
                    print('Yes we can insert now')
                    groupe.save()
                    print('insert successful!!!')
                    return HttpResponseRedirect("CreateGroup/success")
                else:
                    return HttpResponseRedirect("CreateGroup")
            else:
                # In reality we'd use a form class
                # to get proper validation errors.
                return HttpResponse("fields libelle is empty!")
                # "Make sure all fields are entered and valid.")
                ## Process the form data
                # pass
                # return redirect('success')
        else:
            # print('form pas valide')
            print("The following fields are not valid : %s" % ", ".join(form.errors.as_data()))
            return render(request, 'appMenuAdministrator/L_liste_GroupeUtilisateur/FicheCreaGroupe1.html', {'form': form})
    else:
        form = f_groupe_userForm()
        data = data
    # print(f"La valeur de libellé est : {LibELE_GROUPE}")
    return render(request, 'appMenuAdministrator/L_liste_GroupeUtilisateur/FicheCreaGroupe1.html', {'form': form, 'sEtat': sEtat, 'data': data})
[15/May/2025 15:19:43] "GET /static/css/all.min.css HTTP/1.1" 404 1985
The begining Request method is : GET
[15/May/2025 15:19:45] "GET /AccessAdmin/Insert_group HTTP/1.1" 200 11230
[15/May/2025 15:19:45] "GET /static/css/all.min.css HTTP/1.1" 404 1985
The begining Request method is : POST
Yes i am in POST method
POST
The following fields changed: LibELE_GROUPE
The following fields are not valid : UTIL_CREATION, UTIL_MODIF, Soc_sigle, created_at, updated_at
[15/May/2025 15:19:55] "POST /AccessAdmin/Insert_group HTTP/1.1" 200 11181
[15/May/2025 15:19:55] "GET /static/css/all.min.css HTTP/1.1" 404 1985
I had the same error. I solved it by using getAbsolutePath, from https://storybook.js.org/docs/faq#how-do-i-fix-module-resolution-in-special-environments
This resolved itself after a reboot
https://github.com/firebase/flutterfire/issues/13533
Maybe this will work; I lost a whole day on it.
Oftentimes, a variety of tests are used to get the best of both worlds. Local tests will be run first since they are the easiest and most efficient. Then, on the server side, tests can be run pre-merge and sometimes post-merge as well. Of course, exactly which tests and the extent of testing is based on the scenario.
On Google Sheets you can copy the rows with Ctrl + C and paste them with Ctrl + Shift + V.
This is a known issue: https://github.com/InsertKoinIO/koin/issues/2044. Try using the latest version, v4.0.4; you can find the version history here. If it does not work with the latest version, downgrade to v3.5.6 (reference) and wait for the stable v4.1.0 release.
You can play with the opacity, but without reducing it completely, which would render the button insensitive to hover:
.btn {
    opacity: 0.01;
}
.btn:hover {
    opacity: 1;
}
This does not prevent adding a transition.
Having encountered this myself, I presume you're running a relatively current version of Composer compared to the rest of your packages. The error is due to your version of Symfony being very outdated; as a result, the sensio/distributionbundle post-install hooks now pass invalid data back to Composer.
Downgrading Composer to 2.2.x should be old enough for the install to work, though it'd be a far better idea to remove the reliance on sensio/distributionbundle, which has been archived and unsupported for many years now.
See this answer to the crosspost at cstheory for the proof.
I don't know if this answers your question. I have done many experiments, and this is what I found; you can try this link from maxwin12. Hope this answers your question.
AWS Managed Microsoft AD currently takes daily snapshots automatically; there's also an option to take up to 5 manual snapshots.
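For reference, a manual snapshot can be taken with boto3's Directory Service client; here is a minimal sketch (the directory ID and snapshot name are hypothetical):

import boto3

ds = boto3.client("ds")

# Take one of the up-to-5 manual snapshots
resp = ds.create_snapshot(DirectoryId="d-1234567890", Name="before-schema-change")
print(resp["SnapshotId"])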
According to JSFiddle, it does show.
As others pointed out, if you change border-top and border-right to something other than white, that might help as well.
Try changing the type of the new column to the following:
{
    "name" : "new_field",
    "type" : ["null", "string"],
    "default" : null
}
Here "null" refers to a data type, and null to the null value.
As of today, a guest cannot see or use GitHub Copilot in the right bar; they can only do so with the extension.
Everything is simpler: it actually swaps the bank address lines depending on the configuration bit.
I had the same problem here; the solution was to change the Kubernetes file to the Java version I wanted (in my case 17).
If anyone else has a similar problem, check your Kubernetes file or Dockerfile. Thanks for the topic.
Is there any other data on the sheet? If not, then this will give you the number of rows in the Table, whether it's filtered or not, and you simply add to it the number of rows used in your header and/or extra rows above the Table:
Sheets("Sheet1").ListObjects("Table1").DataBodyRange.Rows.Count
If your chat is between 2 users only and that won't change, the chat table is unnecessary. You can keep your message table as:
ID (long), IdSender (INT) (FK), IdReceiver (INT) (FK), Message (TEXT)
But add some indexes. You've made an absolutely normal and universal structure for a small database: even if you have 3, 100, or 1 user in a chat, you always need to keep a link to the sender and to the chat.
But better: don't keep messages in the database. You can store posts and comments there, but not every single message; even with good B-tree indexes, it will become laggy. Use special services for this (for example, dedicated files or storage).
The --query parameter uses JMESPath to get attributes. With that, you run the following command to return only the plan name:
aws backup get-backup-plan \
--backup-plan-id {plan_id} \
--query BackupPlan.BackupPlanName
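If you'd rather do the same from Python, a rough boto3 equivalent (the plan ID is a placeholder) reads the same attribute from the response dictionary:

import boto3

backup = boto3.client("backup")

# Same attribute the --query expression extracts
resp = backup.get_backup_plan(BackupPlanId="your-backup-plan-id")
print(resp["BackupPlan"]["BackupPlanName"])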
I know this is an old post, but I had this happen after upgrading Visual Studio 2022.
It was giving the ambiguous reference for System.Net.Http. (This relates a little to another post about a NuGet package.)
Insight into the project: the REST API is built using .NET Framework 4.6.2 (important to the issue).
Visual Studio 2022 updated and added .NET Framework 4.6.1, which made that component be found in 2 locations that both had System.Net.Http.
I ventured into the location where the two references were and was able to link the date to that addition/VS update. (Right-click on the area with the ambiguous reference and it shows you where they are.)
So the fix in my scenario was to delete the newly added 4.6.1 framework that the VS update had added.
This may or may not come up for newer versions of Visual Studio updates.
Hope this is useful for someone.
As said in the "How can I configure Codeblocks to not close the console after the program has finished?" question:
Go to Project -> Properties -> Build targets. You should see a checkbox labeled "Pause when execution ends" somewhere there. Your application type must be Console application.
I was getting the same issue using @tanstack/[email protected]. Setting cacheTime: 0 solved it for me.
In version 5, cacheTime has been renamed to gcTime, as per https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#rename-cachetime-to-gctime
I found a solution by setting:
for i in range(0, dag.params["input_number"]):
This way the DAG is created with all the tasks from 0 to 12, as set by default in params, but when I run it, it gets the input I give for that run, which can be lower than 13.
Thank you for your answer! It was really helpful.
Now I can send a recorded file to the TV in chunks. It is strange: the TV recognizes voice when I send the file in 120-byte chunks, but it can't recognize it when I send 119-byte chunks. Maybe you know this issue and can help. In any case, I can do it with a recorded wav file.
My next question is about real-time audio streaming. Do you know how it can be implemented?
I will be really grateful for your additional help.
I'm sorry, I may not be explaining this well. We have a field ltd that will hold a currency value with two decimal places. I want to verify the field is any valid two-decimal currency; for example, 12000.00, 23.55, and 23910.01 would all be valid, while 1.1, 24552.134, etc. would be invalid.
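To illustrate the intended check, here is a minimal sketch in Python, assuming the value arrives as a string and that thousands separators are not allowed (both assumptions, since the comment doesn't say):

import re

# One or more digits, a decimal point, then exactly two digits
CURRENCY_RE = re.compile(r"^\d+\.\d{2}$")

for value in ["12000.00", "23.55", "23910.01", "1.1", "24552.134"]:
    print(value, "valid" if CURRENCY_RE.match(value) else "invalid")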
I tried this in my test environment and faced the same issue, where Keycloak failed to connect to the database.
After troubleshooting, I found the main issue was incorrect service dependencies and connection setup between Keycloak and MySQL in Docker on Azure, especially when the containers start.
I used a simple docker-compose.yml (instead of a Dockerfile) with the official images and proper healthchecks, making sure Keycloak only starts after MySQL is ready.
The docker-compose.yml file looks like this:
version: "3.8"

services:
  mysql:
    image: mysql:8.0
    container_name: keycloak-mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: keycloak_db
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "3306:3306"
    volumes:
      - mysql_data:/var/lib/mysql

  keycloak:
    image: bitnami/keycloak:24.0.4
    container_name: keycloak
    depends_on:
      mysql:
        condition: service_healthy
    environment:
      KEYCLOAK_DATABASE_VENDOR: mysql
      KEYCLOAK_DATABASE_HOST: mysql
      KEYCLOAK_DATABASE_PORT: 3306
      KEYCLOAK_DATABASE_NAME: keycloak_db
      KEYCLOAK_DATABASE_USER: keycloak
      KEYCLOAK_DATABASE_PASSWORD: password
      KEYCLOAK_ADMIN_USER: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    ports:
      - "8080:8080"

volumes:
  mysql_data:
Save the file and then run: docker-compose up -d
The setup above works because Keycloak only connects once the MySQL database is ready: the healthcheck on the MySQL container lets Docker know when MySQL is up and running, and with the depends_on condition Docker waits to start Keycloak until MySQL accepts connections. This avoids custom Dockerfiles, where something missing or misconfigured can be tricky to track down and may cause errors. Finally, the persistent volume for MySQL makes sure the database keeps its data even if the container restarts.
Make sure the Azure VM firewall and network security groups allow inbound traffic on port 8080.
Ensure Docker is properly installed and running on your VM; you can view logs with docker-compose logs -f to check container startup progress.
I'm not 100% sure I understand your issue, but you should definitely assume that it is possible for the password to be read by other JS running on the same page.
Check whether any token has been set on your account or not; the same thing happened to me as well.
Do we have any idiom for creating one or more item resources at once, without affecting existing sibling resources?
Exactly the same:
The 201 (Created) status code indicates that the request has been fulfilled and has resulted in one or more new resources being created. The primary resource created by the request is identified by either a Location header field in the response or, if no Location header field is received, by the target URI. -- RFC 9110
Emphasis added.
So a single POST request where the server produces multiple resources, with the HTTP response headers identifying the primary resource created, is all straightforward.
The response body... I don't think there are any fixed standards on how to describe the created resources in the response body. On the web, it would normally just be an HTML page showing the human a bunch of links....
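For concreteness, here is a minimal sketch of such a response (the /orders resources and the use of Flask are illustrative assumptions, not anything from the question): the Location header names the primary resource, and the body is free to enumerate everything that was created.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    # Pretend this one request created a primary order plus two line items
    created = {
        "order": "/orders/42",
        "items": ["/orders/42/items/1", "/orders/42/items/2"],
    }
    # Location identifies the primary resource; the body's shape is up to you
    return jsonify(created), 201, {"Location": "/orders/42"}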
A bug report has been submitted for this issue under https://forge.typo3.org/issues/106707
Flutter Mapbox v4 supports high-performance vector tiles, offering smooth, interactive maps with customizable styles and layers. Perfect for mobile apps, it ensures fast rendering, offline capabilities, and crisp visuals at any zoom level. Build beautiful, responsive maps using Flutter’s flexible UI and Mapbox’s powerful vector tile technology.
Is there a better way to nest if blocks in google spreadsheets ?
Yes. Use the SWITCH function. Here is an example using your code:
=SWITCH(H4,
1, "CORRECT",
2, "CORRECT",
3, "CORRECT",
4, "CORRECT",
"Incorrect")
The last entry is the default: if H4 does not equal 1, 2, 3, or 4, then the last entry is the result.
It is very similar to the SELECT CASE construct found in many programming languages.
Can you please double-check your entity definitions?
I also had a similar issue, and after hours of debugging I found that I needed to enable the synchronize attribute to allow TypeORM to consider a given entity for migration.
The entity decorator should look like:
@Entity({ name: "table_name", synchronize: true })
If you want to run the TypeORM CLI on TypeScript projects, you can install a wrapper for that purpose using the command below:
npm install typeorm-ts-node-commonjs
After installing this, you can run commands with TypeScript-based data sources too.
For example, to generate a migration, one can use the command below:
npx typeorm-ts-node-commonjs migration:generate path/to/migration -d path/to/typescript/datasource
It looks like you are running into a known issue with the latest Redshift driver. According to the description in this issue you can still use the latest Liquibase release (4.31.1) by downgrading the driver to a previous version. You can download older drivers here: https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-previous-driver-version-20.html
I solved it this way:
.ui-datepicker {
    z-index: 9999 !important;
}
.ui-datepicker.ui-widget {
    position: absolute !important;
}
Hi, I don't know how Hyperledger works, but here is an idea: each file has a unique SHA-256 hash that can be added to the blockchain instead, which is easier and cheaper, and the file remains private. Only people who have the file can check its hash against the hash stored on the blockchain, thus proving the authenticity of the file. I found a site that does this on Ethereum, doc2block; it allows you to add files to the Ethereum blockchain.
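A sketch of the hashing side in Python (the filename is a hypothetical example):

import hashlib

def file_sha256(path: str) -> str:
    # Hash the file in chunks so large files don't need to fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Anyone holding the same file can recompute this digest and compare it
# with the value anchored on the blockchain.
print(file_sha256("contract.pdf"))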
I had faced this issue.
Check whether you have already installed the Dev Containers extension in your VS Code and that it is up to date.
Once installed, run docker ps in the terminal.
If the error still persists, then try attaching manually.
In a job, you can use the "Move files" step, setting the source directory and destination directory with the "Wildcard (RegExp)" field empty (in case there are no restrictions on file type).
In a transformation, you can use the "Process files" step. Read two input parameters ("Dir source" and "Dir destination") with the "Get variables" step, read the list of files you want to move with the "Get file names" step (where "Filename is defined in a field?" is checked and "Get filename from field" is filled with your "Dir source" parameter), and use two "Concat fields" steps to build the full source path and full destination path that "Process files" will process.
Below is the transformation design screenshot.
Best regards
Just on the off chance, can you advise which files were missing and where you moved them to? I've got the same issue and have tried various options to publish the files, but I'm not getting anywhere.
Figured it out: it needs to wait for support of the new spec below, which is not in the Java MCP SDK yet, while Python and TypeScript support it today.
https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http
In your Start method, change
ReturnToMainMenuIsOpen = ReturnToMainMenuObject.activeSelf;
to
ReturnToMainMenuObject.SetActive(false);
Please update, it now supports Vue 3.
npm install vue-pivottable@latest
# or
npm install [email protected]
React Navigation v7 added support for web, so it would work now: https://reactnavigation.org/docs/web-support/
Try selecting a different color for drawing, not black on white, and it will work.
This code works:
import SwiftUI
import PencilKit
import Vision

struct HandwritingRecognizerView: View {
    @State private var canvasView = PKCanvasView()
    @State private var toolPicker = PKToolPicker()
    @State private var recognizedText = ""
    @State private var isRecognizing = false

    var body: some View {
        VStack {
            HStack {
                Button("Recognize") {
                    recognizeHandwriting()
                }
                .padding()
                .background(Color.blue)
                .foregroundColor(.white)
                .cornerRadius(8)

                Button("Clear") {
                    canvasView.drawing = PKDrawing()
                    recognizedText = ""
                }
                .padding()
                .background(Color.red)
                .foregroundColor(.white)
                .cornerRadius(8)
            }
            .padding()

            Text(recognizedText)
                .font(.headline)
                .padding()
                .frame(maxWidth: .infinity, alignment: .leading)
                .background(Color.green.opacity(0.1))
                .cornerRadius(8)
                .padding(.horizontal)

            PencilKitCanvasRepresentable(canvasView: $canvasView, toolPicker: $toolPicker)
                .onAppear {
                    toolPicker.setVisible(true, forFirstResponder: canvasView)
                    toolPicker.addObserver(canvasView)
                    canvasView.becomeFirstResponder()
                }
                .frame(maxWidth: .infinity, maxHeight: .infinity)
        }
    }

    func recognizeHandwriting() {
        isRecognizing = true

        // Convert PKDrawing to UIImage
        let image = canvasView.drawing.image(from: canvasView.drawing.bounds, scale: 1.0)

        // Create a request handler
        guard let cgImage = image.cgImage else {
            print("Could not get CGImage from UIImage")
            isRecognizing = false
            return
        }

        // Important: Create the request with the recognition level set to accurate
        let request = VNRecognizeTextRequest { (request, error) in
            if let error = error {
                print("Error: \(error)")
                isRecognizing = false
                return
            }

            guard let observations = request.results as? [VNRecognizedTextObservation] else {
                print("No text observations")
                isRecognizing = false
                return
            }

            // Process the recognized text
            let recognizedStrings = observations.compactMap { observation in
                observation.topCandidates(1).first?.string
            }

            DispatchQueue.main.async {
                self.recognizedText = recognizedStrings.joined(separator: " ")
                self.isRecognizing = false
            }
        }

        // THIS IS THE KEY: Configure the request for handwritten text
        request.recognitionLevel = .accurate // .fast is quicker but less precise
        request.recognitionLanguages = ["en-US"]
        request.usesLanguageCorrection = true
        request.customWords = ["o3Draw"] // Add custom words that might appear in your app

        // Very important for handwriting on iOS 16+
        if #available(iOS 16.0, *) {
            request.automaticallyDetectsLanguage = false
            request.revision = VNRecognizeTextRequestRevision3
        }

        DispatchQueue.global(qos: .userInitiated).async {
            do {
                let requestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
                try requestHandler.perform([request])
            } catch {
                print("Failed to perform recognition: \(error.localizedDescription)")
                DispatchQueue.main.async {
                    self.recognizedText = "Recognition failed."
                }
            }
        }
    }
}

// PencilKit Canvas SwiftUI wrapper
struct PencilKitCanvasRepresentable: UIViewRepresentable {
    @Binding var canvasView: PKCanvasView
    @Binding var toolPicker: PKToolPicker

    func makeUIView(context: Context) -> PKCanvasView {
        canvasView.drawingPolicy = .anyInput
        canvasView.alwaysBounceVertical = false
        canvasView.backgroundColor = .clear
        canvasView.isOpaque = false
        return canvasView
    }

    func updateUIView(_ uiView: PKCanvasView, context: Context) {
        // Updates happen through the binding
    }
}

// For iOS 16+ specific optimizations for handwriting
extension VNRecognizeTextRequest {
    @available(iOS 16.0, *)
    var revision3: Int {
        return VNRecognizeTextRequestRevision3
    }
}

#Preview {
    HandwritingRecognizerView()
}
This is still broken in DataGrip 2024.2.2, Build #DB-242.21829.162, built on August 29, 2024.
Adding this setting in settings.json turns on inline variable display:
"debug.inlineValues": "on",
This needs to be drawn in the Python IDLE program, with code. Please help.
The default retention period when set from the console is 7 days, which is what you currently see. The only way I've found to change this is to disable cross-region backup replication and then enable it again, this time setting the desired retention period in days (a sketch follows below).
During the period when backup replication is disabled, no changes will be replicated, but disabling the current replication will not delete existing replicated data.
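As a hedged sketch with boto3, assuming this is RDS cross-region automated backup replication (the ARN and regions are hypothetical):

import boto3

# Client in the destination region, where the replicated backups live
rds = boto3.client("rds", region_name="us-west-2")
source_arn = "arn:aws:rds:us-east-1:123456789012:db:mydb"

# Disable the current replication (existing replicated backups are kept) ...
rds.stop_db_instance_automated_backups_replication(SourceDBInstanceArn=source_arn)

# ... then re-enable it with the retention period you actually want, in days
rds.start_db_instance_automated_backups_replication(
    SourceDBInstanceArn=source_arn,
    BackupRetentionPeriod=14,
)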
Download the latest version from the official website, install it, and it updates to the new version. For verification, check in Git Bash.
It worked for me!
Anyone who uses the Adobe Photoshop app can change an image from any format to RGB:
Edit -> Mode -> choose RGB.
It can also be done with a shell command that converts the CMYK images in a folder to RGB.
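If you'd rather script the batch conversion in Python than in a shell, here is a hedged sketch using Pillow (the folder name and the restriction to JPEGs are assumptions):

from pathlib import Path

from PIL import Image

# Convert every CMYK JPEG in a folder to RGB, saving in place
for path in Path("images").glob("*.jpg"):
    with Image.open(path) as im:
        if im.mode == "CMYK":
            im.convert("RGB").save(path)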
You can access rows separately and apply a replace only to each row, respectively:
workbook = Workbook('test.xlsx')

def replace_in_row(worksheet, row, old_value, new_value):
    for cell in worksheet.getCells().getRows().get(row):
        if old_value in str(cell.getValue()):
            cell.setValue(str(cell.getValue()).replace(old_value, new_value))

worksheet = workbook.getWorksheets().get(0)
replace_in_row(worksheet, 0, "one", "two")
replace_in_row(worksheet, 9, "one", "five")
or alternatively, adding to @MahrezBenHamad's answer, determine the column range and restrict each replace to a single row with a CellArea:
max_column = worksheet.getCells().getMaxColumn()
worksheet.getCells().replace("one", "two", ReplaceOptions(), CellArea(0, 0, 0, max_column))
worksheet.getCells().replace("one", "five", ReplaceOptions(), CellArea(9, 0, 9, max_column))
Restoring the original program: onReceivedHttpError always gets a 404 error (errorResponse.getStatusCode() == 404), but the URL works fine, even in Chrome.
There is a workaround that still uses MapStruct but avoids @AfterMapping or a full custom mapper implementation:
@Mapping(target = "id", expression = "java(context.getId())")
Target sourceToTarget(Source source, @Context IdContext context);
Issue was that I didn't have
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;

var builder = WebAssemblyHostBuilder.CreateDefault(args);

// This line
builder.Services.AddScoped(http => new HttpClient
{
    BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
});

await builder.Build().RunAsync();

in my code, and I had only registered an HttpClient on the server. This messed something up in the @inject HttpClient Http that was not visible in the F12 console for some reason.
Resolved it myself.
Laravel uses PHP's password hashing for storing passwords.
So I ran the following code to get the hash and put it in the DB:
<?php
$plaintext_password = "newPassword";
$hash = password_hash($plaintext_password, PASSWORD_DEFAULT);
echo "Generated hash: ".$hash;
?>
You can run it on OneCompiler
Aaand now I've finally found the answer here: How I can ignore PathVariable conditionally on swagger ui using springdoc openapi
Right now, I'm doing this from the constructor of my SwaggerConfig class, and that works:
@Configuration
@ComponentScan
class SwaggerConfig {
    init {
        SpringDocUtils.getConfig().addAnnotationsToIgnore(AuthenticatedUser::class.java)
    }
    .
    .
}
Feels a bit smelly to do so statically in a constructor, maybe there's a better place for this?
I would like to point out that despite the AWS Lambda payload limit being 6 MB, in practice your file may be encoded into Base64, which means your effective limit becomes about 4.5 MB, because Base64 encoding results in a 33% size increase.
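A quick Python check of that overhead, purely for illustration:

import base64

payload = b"x" * 1_000_000          # 1 MB of raw bytes
encoded = base64.b64encode(payload)
print(len(encoded) / len(payload))  # ~1.33, i.e. roughly a 33% size increase

# So 6 MB / 1.33 is about 4.5 MB of original payload once the Base64 overhead is counted.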
I hit this exact issue recently and seem to have overcome it.
In the Enterprise application under single sign-on, there is a section for adding optional claims; only when adding something here did it work for me. The token configuration of the app registration itself had no impact on the values passed back in the JWT.
In my case, I used the email field but called it userprincipalname, as my app will be getting tokens from both Entra ID and External Entra ID, of which this is the only like-for-like claim I could use.
I hope that makes sense, but let me know if not.
I managed to fix that by resetting the cache:
yarn start --reset-cache
How did you make it work with LWC? I've tried it and didn't have luck, so I had to use an Aura component.
After hours of debugging, I found what the issue was. Each of my entity classes was decorated with the decorator below:
@Entity({ name: "table_name", synchronize: false })
I had marked all the entities with synchronize: false to avoid any accidental schema changes, since enabling synchronization at the data source level would apply to all the entities (or tables).
Setting this to synchronize: true enabled TypeORM to consider the entity for migration, and thus I was able to generate migrations for all modified entities by temporarily enabling this synchronization attribute on the entities that needed a migration script generated.
It's doable today with Customizable select elements.
https://developer.mozilla.org/en-US/docs/Learn_web_development/Extensions/Forms/Customizable_select
Is the FDW host IP 10.252.86.4 reachable from the source PostgreSQL server?
This is likely the root cause: the source PostgreSQL server (a managed Azure PaaS) cannot directly connect to that private IP.
Azure PostgreSQL managed instances have outbound restrictions:
You cannot SSH into the managed instance to test connectivity, but you can try:
Using an Azure VM in the same subnet/VNet as the source server to test connectivity to the foreign IP (see the probe sketched after this list).
If the two servers are in different VNets, set up VNet peering or use public endpoints.
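A minimal connectivity probe you could run from such a VM, with the IP and port taken from the question:

import socket

# Attempt a plain TCP connection to the FDW target
try:
    with socket.create_connection(("10.252.86.4", 5432), timeout=5):
        print("TCP connection succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")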
Try using the public IP of the target PostgreSQL in the FDW server definition instead of the private IP.
Since external ODBC connections work, the public endpoint is reachable. Example:
CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'public-ip-or-dns-name-of-target', port '5432', dbname 'target_db');
Make sure the firewall on the target server allows the IP of the source server or the Azure subnet:
Check the firewall rules on the target Azure PostgreSQL to allow inbound connections from the source server's outbound IP.
If security requires private IPs only:
Consider setting up Azure Private Link / Private Endpoint and ensure both servers are in peered VNets with proper routing.