For posterity's sake, I was finally able to track down an answer to this issue. The documentation is not aligned with the actual behavior of the API.
From Michael Gurch at Google:
Apologies for the confusion surrounding this issue. I was able to speak to a member of the product team on this. They informed me that the limit is 1,200 phrases total across all PhraseSets referenced in a single recognition request. Essentially, all the phrase sets are merged into one prior to validation, and then processed.
I have requested that they update the documentation related to quotas to align with this constraint https://cloud.google.com/speech-to-text/v2/quotas#adaptation
I further requested they update the migration guide from v1 to v2 to call out this change in constraints that was introduced as it can break existing implementations like it did to mine. https://cloud.google.com/speech-to-text/v2/docs/migration
This happened to me before: the table was occupied due to a DB trigger that ran some queries. Indexing resolved it.
I get frequent and varied error messages under "Summary of failures for Google Apps Script". I haven't seen my actual script process FAIL, but I get all kinds of messages like this:
Exception: Limit Exceeded: Gmail
Server error occurred. Please wait and try again - I get this one A LOT, sometimes multiple times a day, although my failure notice is set for once a week.
Settings -> Tools -> Emulator -> Synchronize Clipboard
Source: https://issuetracker.google.com/issues/227658377#comment4
testImplementation("androidx.compose.ui:ui-test-junit4-accessibility:1.9.3")
composeTestRule.enableAccessibilityChecks()
I see that this is an older problem, but to log in to the vault through C#, do I need an API license, or is a classic M-Files license enough? And what is the exact address for logging in to the vault, please? I am stuck on this problem.
I dug around, and even though the service uses the FOREGROUND_SERVICE_SPECIAL_USE flag, Android still checks whether the uid is allowed to use it or not. So I think Android disabled this as well, or rather is keeping it for exceptions for its own internal use. If you want to dig around further, check this out:
https://android.googlesource.com/platform/frameworks/base/+/main/core/java/android/app/ForegroundServiceTypePolicy.java
If you are copying multiple lines and want to paste them into a running GDB session, here is one method:
Create a GDB script file with your commands:
print var1
print var2
finish
next
Source it from GDB:
source ~/path/to/your/file.gdb
The php artisan optimize:clear command does not automatically rebuild the cache; it only clears it.
To rebuild it, run php artisan config:cache (or php artisan optimize).
You can use positioning relative to the bottom.
position: absolute;
bottom: 100%;
When comparing HTTP vs HTTPS, the difference goes far beyond security: HTTPS is now faster, safer, and SEO-friendly.
Originally, many believed HTTPS would slow websites due to encryption, but with modern protocols like HTTP/2 and TLS 1.3, HTTPS actually improves page loading speed. These technologies enable multiplexing, header compression, and faster data transfer, making HTTPS sites perform better than traditional HTTP ones.
In addition, Google prioritizes HTTPS websites in search rankings, enhances user trust through the padlock icon, and ensures data integrity during transmission. So switching to HTTPS isn't just about encryption; it's a direct boost to both performance and SEO visibility.
Can we embed a public Facebook profile into an iframe?
The <App> parameter is utilized to locate the application's reference assembly. The issue stems from the fact that App is a Razor page rather than a class. Visual Studio occasionally fails to determine the namespace for a Razor page, which results in the error.
This has also been tested on VS 2026, with the same outcome.
Pending a resolution from Visual Studio, the workaround is to create the .cs code-behind file.
Windows Server 2019 Core RDP session limit may not work due to configuration issues. Multiple users can connect simultaneously if group policies or licensing settings aren’t correctly applied, bypassing session restrictions.
You might want to check out this project: https://ionicvoip.com/ — it could be useful.
The difference may have to do with having an interactive session. When you run ssh in an interactive shell, you'll be connected to a TTY, but automated scripts run by cron will not. Moreover, Bash will read from ~/.bash_profile on an interactive shell and ~/.bashrc for a non-interactive one, which can lead to subtle differences in the environment.
You might try debugging ssh by adding the -vvv option and capturing the output and see what's different between the manual and cron runs. Another thing to try would be the -t option for ssh.
Truly learning a command-language interpreter, especially one with as large a manual as Bash's, can take years. In the hope of accelerating that learning for an up-and-coming scripter, I thought I'd share these suggestions unrelated to your query:
Your grep argument looks like a glob, but grep uses regex and yours will match files that have daycount anywhere in the name (though there must be one character before). You probably want grep '^\.daycount.+$'; see regex101.com for details on how these differ.
But you don't actually need grep: c=$(ls -1d /tmp/.daycount* | wc -l) will do. This uses globs to select the files. The -d option ensures directories matching this pattern are only one line; the -1 is implied, but I make it explicit here.
Using rm /tmp/.daycount[0-9] further limits how many files might be deleted to just those 10 possible files: /tmp/.daycount0 through /tmp/.daycount9.
You can omit both exit commands, as there is an implicit exit at the end of every script.
It's considered safer to use SSH keys rather than passwords when practical, and they come in handy when running automated scripts as no password is required. You can set one up by doing the following (only needs to be done once):
ssh-keygen -N '' # Create the key
ssh-copy-id [email protected] # Authorize (install) the key
Then you can do ssh [email protected] reboot without ever being asked for a password.
Is it possible to configure cron on 192.168.0.1? If so, you can completely avoid the SSH step that is causing problems by having the script run locally.
Note: You can escape characters like " and \ in a "" string (e.g. "\" \\"), but you cannot escape anything in a '' string, not even ' – any \ gets passed unmodified.
You were close! This will do it:
import json
your_dict = json.loads(example.schema_json())
@TatuLund's answer is correct.
I just want to add a working example:
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.NumberField;
import com.vaadin.flow.data.binder.Binder;
import com.vaadin.flow.data.binder.BindingValidationStatus;
import com.vaadin.flow.router.PageTitle;
import com.vaadin.flow.router.Route;
import java.time.LocalDate;
import lombok.Data;

@Route("validate-example")
@PageTitle("Single Field Validation Example")
public class SingleFieldValidationView extends VerticalLayout {

    private final Binder<FormData> binder = new Binder<>(FormData.class);
    private Binder.Binding<FormData, Double> bindingQtaTot;

    private final NumberField totQntField = new NumberField("Total quantity");
    private final Button btnCalcola = new Button("Calculate");

    public SingleFieldValidationView() {
        configureBinder();
        configureButton();
        add(totQntField, btnCalcola);
        setPadding(true);
        setSpacing(true);
    }

    @Data
    private static class FormData {
        private String descrizione;
        private LocalDate dataCreazione;
        private Double quantitaTotale;
    }

    private void configureBinder() {
        FormData formData = new FormData();
        binder.setBean(formData);

        // Keep the binding reference to validate later
        bindingQtaTot = binder.forField(totQntField)
                .asRequired("The quantity is required!")
                .withValidator(q -> q != null && q > 0, "The quantity must be greater than zero")
                .bind(FormData::getQuantitaTotale, FormData::setQuantitaTotale);
    }

    private void configureButton() {
        btnCalcola.addClickListener(event -> {
            log("Validating quantity...");
            BindingValidationStatus<Double> validationStatus = bindingQtaTot.validate();
            if (validationStatus.isError()) {
                String msg = validationStatus.getMessage().orElse("Validation error");
                Notification.show(msg, 3000, Notification.Position.MIDDLE);
                log("Quantity field validation error: " + msg);
            } else {
                Double value = totQntField.getValue();
                Notification.show("Valid quantity: " + value, 2000, Notification.Position.MIDDLE);
                log("Valid quantity: " + value);
            }
        });
    }

    private void log(String msg) {
        System.out.println("[SingleFieldValidationView] " + msg);
    }
}
Try updating Xcode from the App Store if an update is available. In my case, I was using the latest iOS version but not the latest version of Xcode, which caused the problem.
Install the extension "Hide Suggestion and Outlining Margins". This returned the margin to the minimal size I had in VS 2017
I narrowed down the problem. It seems to happen when I join entities A and B, and B is a child of C and inheritance between B and C is "JOINED".
It also turned out that the error goes away if I upgrade to Spring Boot 3.1.1 (which uses Hibernate 6.2.5).
Long story short: I probably bumped into an already-fixed bug which is present in Spring Boot versions 3.0.0 - 3.1.0.
Anyway, the narrowed-down minimal example is here:
https://github.com/riskop/20251017_complicated_criteria_query_problem
Attachments are stored in /home/<username>/.local/share/signal-cli/attachments/.
It seems that you are trying to get the token using the client credentials flow but are making the request as in the authorization code flow.
In the client credentials flow you do not initiate a connection to the authorization endpoint to get an authorization code. Instead, the call is made to the token URL with the client ID and client secret, and the request is for the access token.
If the client ID and client secret are correct, then in response you will get the access token, which can be used to initiate the connection.
Note: Client authentication is supported by external OAuth providers such as Azure, Okta, etc.
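As a sketch of what that token request looks like (the token URL and credentials below are placeholders; you would POST the body with any HTTP client):

```python
from urllib.parse import urlencode

def build_client_credentials_request(token_url, client_id, client_secret, scope=None):
    """Build the POST body for an OAuth 2.0 client credentials grant.

    The client never touches the /authorize endpoint; it POSTs straight
    to the token endpoint with its ID and secret.
    """
    params = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        params["scope"] = scope
    return token_url, urlencode(params)

# Placeholder values for illustration only
url, body = build_client_credentials_request(
    "https://example.okta.com/oauth2/default/v1/token",
    "my-client-id",
    "my-client-secret",
)
print(body)
```

Note that there is no redirect and no authorization code anywhere in this exchange; the response to the POST is the access token itself.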
I had the same issue; changing the certificate fixed it for me. For some reason, even though they hadn't changed, the intermediate certificates weren't accepted anymore.
I went from Sectigo to LetsEncrypt.
── Label Info ───────────────
You can do it easily with a Row — just make the left line short and the right one expanded.
Row(
  children: const [
    // Short left line
    SizedBox(
      width: 20,
      child: Divider(thickness: 1),
    ),
    SizedBox(width: 8),
    // Label text
    Text(
      'Label Info',
      style: TextStyle(fontWeight: FontWeight.bold),
    ),
    SizedBox(width: 8),
    // Long right line
    Expanded(
      child: Divider(thickness: 1),
    ),
  ],
),
Just upgrade the emulator SDK: go to Tools -> SDK Manager -> SDK Tools. That solved my issue.
I tried many tips, but the only one that worked for me was:
Open about:config
Search for network.stricttransportsecurity.preloadlist
Set it to false (double-click on it)
This creates a security risk, but in a dedicated profile meant for testing local sites, it does the job!
You need to double-check whether the driver is available or not.
Go to My Computer -> Manage -> Devices and see if there is any exclamation mark. If yes, you need to install the driver through Windows Update.
Go to Windows Update -> View optional updates -> choose the line containing "ADB USB download gadget".
Restart and you are good to go.
You can use ollama: pull a model and run it locally. Give context to the model in the prompt and you should get a proper answer.
The problem was in the Docker/Zookeeper environment: ZOO_CFG_EXTRA doesn't exist there, so the variables are not inserted.
This question is all over
And did you see performance changes in your project?
Well, it depends on what (and how) you are doing with that data. If you are doing work in the compose block, then you are going to see multiple recompositions if you didn't `remember` it. Anyway, you are supposed to use a view model for this kind of thing.
`rememberSaveable` (by default) will also not help you in this case, as it would need a custom saver that you pass in to make it "remember" across config changes or recompositions.
Firstly, you can't use an OAuth 2.0 request path with a clientID in Forge apps. You must only use a request path with the URI structure that is valid for Forge apps, as stated in the Authentication and authorization section of Jira's REST API documentation.
Secondly, no Forge app can "fetch Jira data across multiple installed orgs", because that would breach the basic security principle of Forge apps and all Atlassian apps: tenant isolation. You could have found the answer to that frequently asked question with a Google search of "With an Atlassian Forge app, can I fetch Jira data across multiple installed orgs?"
To use CNAM with RingOut, register your caller name with a trusted CNAM registry such as EZCNAM. Once the record propagates, your name will appear automatically on outbound RingOut calls whenever the recipient’s carrier supports CNAM lookups.
Adding this configuration to the host helped me:
# Configure Docker to use the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
# Restart the Docker daemon to apply the changes
sudo systemctl restart docker
Check that nvidia appears in the runtimes:
docker info | grep Runtimes
You should get something like this:
Runtimes: runc io.containerd.runc.v2 nvidia
Problem solved! For those facing this issue in the future, I basically did the same thing again:
I removed the relation in the single type
Published without it
Added the relation again
Published with it
Then I got everything at /accueil?populate[sectionTemoignages][populate][temoignages][populate]=*
Thanks anyway :)
With the client credentials flow, the token's iss is "https://sts.windows.net/ instead of "https://login.microsoftonline.com, which is the case for the login flow. Even though I set API version 2, client credentials always gets version 1. Am I missing something in the configuration?
This doesn't directly answer your question, but an alternative may be to use the provided FakeCar lib.
I had this same issue with my installation -- did you grab the code signing cert? That fixed it for me, and it doesn't automatically download to the certs folder in the layout.
Which certificate broker did you use? I have the same issue with a Sectigo certificate and might try LetsEncrypt to see if the certificate itself is related in one way or another.
The latest emulator VHAL uses the AIDL interface, so there is no need for steps 1 and 3, which modify the previous HIDL VHAL.
Verify that other test properties exist; if not, the build flag is turned off. The flag is ENABLE_VEHICLE_HAL_TEST_PROPERTIES.
If you can detect other test properties, there might be a bug in the implementation.
I would recommend using the VHAL dump commands, which are more helpful. See FakeVehicleHardware::dumpHelp().
Git is probably not recognizing changes after adding a folder to the Simulink project path because the .SimulinkProject folder is not being added to the Git repository. I'm guessing that folder contains XML files with critical project information, including changes to the project path.
First, before I offer a solution: is your local branch properly connected to your remote branch on Metalab? If yes, we can proceed.
Have you solved it? I am also looking for a solution.
Press Ctrl+Shift+P
and type Preferences: Open User Settings (JSON)
Add this code:
"github.copilot.advanced": {
    "debug.useNodeFetcher": true,
    "debug.useElectronFetcher": true
}
Then restart VS Code.
Now you can chat with GitHub Copilot.
An OCPP server sample implementation based on Spring Boot.
All messages for all versions of OCPP are written in Java.
If you want to customize the business logic, implement the corresponding server handler.
The child appears taller than its content due to the default align-items: stretch style on flex containers. Setting display: flex on a container also sets align-items to stretch, and since the flex container has a flex-direction of column, the child div stretches to fill the available height of the parent. This is not a bug; it can be changed simply by changing the align-items property on the flex container, or by the solutions you've already provided.
We compared a previous embedded capabilities list with the one currently being generated and saw CarPlay Navigation App enabled. Since our app was no longer being considered an audio app, we could not attach the CPNowPlayingScreenTemplate.
The fix was to force a rebuild now that the capabilities have been updated back to the ones we had.
import signal
import sys

def handle_sigterm(signum, frame):
    """
    Signal handler for graceful shutdown.
    Triggered when the process receives SIGTERM or SIGINT.
    """
    print("Received shutdown signal, cleaning up...")
    # Attempt to stop running browser processes gracefully.
    # You can extend this tuple with any other browser names you use.
    for b in ("firefox", "edge"):
        try:
            stop_function(b)  # user-defined cleanup function
        except Exception:
            # Ignore any errors during cleanup to ensure shutdown continues
            pass
    # Exit the process cleanly
    sys.exit(0)

# Register the handler for termination (SIGTERM) and interrupt (SIGINT / Ctrl+C)
signal.signal(signal.SIGTERM, handle_sigterm)
signal.signal(signal.SIGINT, handle_sigterm)
There is no built-in member of X509Store or the X509FindType enumeration that directly says "give me the certificate currently configured in IIS". IIS does not expose the SSL binding certificate via a managed API like X509Store with a special flag or find type.
But you can retrieve the IIS certificate by reading the SSL binding from HTTP.sys, or by querying the Windows certificate store based on the IIS binding.
How IIS Stores SSL Bindings
When you bind an HTTPS port in IIS, it stores the SSL certificate information using HTTP.sys, the kernel-mode driver. The mapping is tied to:
IP address (or 0.0.0.0 for all IPs)
Port (e.g., 443)
Certificate thumbprint
Application ID
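For example, the active binding can be inspected with netsh http show sslcert and the thumbprint pulled out of its output. A minimal sketch (the sample output below is illustrative and the hash value is made up; on a real server you would capture the command's output via subprocess):

```python
import re

# Illustrative fragment of `netsh http show sslcert` output
sample_output = """
    IP:port                      : 0.0.0.0:443
    Certificate Hash             : a909502dd82ae41433e6f83886b00d4277a32a7b
    Application ID               : {4dc3e181-e14b-4a21-b022-59fc669b0914}
"""

def extract_thumbprint(netsh_output):
    """Pull the certificate thumbprint from `netsh http show sslcert` output."""
    m = re.search(r"Certificate Hash\s*:\s*([0-9a-fA-F]+)", netsh_output)
    return m.group(1) if m else None

thumbprint = extract_thumbprint(sample_output)
print(thumbprint)
```

Once you have the thumbprint, X509Store together with X509FindType.FindByThumbprint retrieves the actual certificate object from the Windows store.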
Here in 2025, I have the same problem when reading a BLOB from an old DB with the Oracle Managed Data Access Client (Core 23 or any other version) under .NET Core.
It was hard to solve, but asking an AI to "read blob from oracle without using oracledatareader" produced an answer, and it works.
// connection prepared
string sql = @"
DECLARE
    l_blob BLOB;
BEGIN
    select blob_field into l_blob
    from your_table_name
    where id = :id;

    :BlobData := l_blob;
END;";

using (var transaction = conn.BeginTransaction())
{
    try
    {
        // reading config
        var getBlobCmd = new OracleCommand(sql, conn);
        getBlobCmd.Parameters.Add(new OracleParameter("id", id));
        var blobParam = new OracleParameter("BlobData", OracleDbType.Blob)
        {
            Direction = ParameterDirection.Output
        };
        getBlobCmd.Parameters.Add(blobParam);

        // read
        getBlobCmd.ExecuteNonQuery();

        // get the blob value here
        var oracleBlob = blobParam.Value as OracleBlob;
        if (oracleBlob == null || oracleBlob.Length == 0)
            throw new InvalidOperationException("blob length is 0");

        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        throw;
    }
}
I wrote a chrome extension that does just this! https://chromewebstore.google.com/detail/line-highlighter/nffehhefkilbinmemhnhepadbeadnfep
You can see the technical implementation and source code here if you're still interested in doing it yourself: https://github.com/kylechadha/line-highlighter?tab=readme-ov-file#technical-implementation. It's open source, so enjoy!
"Compares like fabs(d) < eps take the float out of floating point": hear, hear!
I spent a whole day troubleshooting a condition like the one below. It always fails when checking 2 tags together. Has anyone succeeded without using the above solution of splitting the Control statement into one per tag?
"Condition": {
    "Null": {
        "aws:RequestTag/Department": "true",
        "aws:RequestTag/Name": "true"
    }
}
Vote, this is my code now, and it is still not being shown:
@script
<script>
    document.addEventListener('livewire:init', () => {
        Livewire.on('swal-alert', e => {
            Swal.fire(e);
        });
    });
</script>
@endscript
You can dump the clipboard history to a file using this command line tool. Then read and parse the file to get your paths.
I encountered the same problem. I think you must supply the r explicitly. Here is the code:
instance Monoid r => Monad (Some r) where
  return a = Thing mempty a
  Thing r a >>= f =
    let Thing r' b = f a
    in Thing (r <> r') b
If you are also looking for the table to line wrap on window resize: use the MyHTML class from their edited answer, set table.diff {width: 300px} in _styles, and change wrapcolumn to a large number like 2000.
• Human tuberculosis (TB) is considered one of the significant public health challenges worldwide [1], even with treatment options available [2].
• In 2018, the disease accounted for 1.6 million deaths and 10 million new cases globally, making it the leading cause of death from a single infectious agent [1].
• This lethal disease is caused by an infection with Mycobacterium tuberculosis (M. tb) [3], [4], which is part of the Mycobacterium tuberculosis complex (MTBC) [3], [5], [6] and is known for its slow growth and potential for latent infection [3].
• According to a report by the World Health Organization (WHO, 2019), approximately one-fourth of the global population is latently infected with TB [1].
• The rise of drug-resistant strains and co-infections significantly contributes to the mortality and morbidity associated with TB [1], [2].
• These factors highlight the ongoing challenges in controlling and treating tuberculosis effectively.
I know it's been a while since the question, but does it have anything to do with the closing tag of edmNumberMapping being incorrect? Also check that you don't overwrite MaxPrecision to 1; make it 5:
<edmMappings>
  <edmNumberMapping>
    <add NETType="Int16" MinPrecision="1" MaxPrecision="5" DBType="Number" />
  </edmNumberMapping>
</edmMappings>
I found this error happens when you run an incompatible version of Node.js.
I was using v24.4.0, and the issue was gone when I switched to v20.11.1.
If you are still facing this issue, please don't hesitate to contact me. My team and I could support you; we integrate OIDC with all kinds of applications.
Use disableAutoFocus set to true:
<Modal
    open={open}
    onClose={() => { }}
    disableAutoFocus={true} // <-- add this
pkill -f "npm run dev" || true
This is a very old question, but I come here from time to time.
Support for std::hash for uuid was introduced in Boost starting from version 1.68.0. You no longer need to explicitly provide a template specialization.
The best regex to use for client-side validation to ensure correct input would be the one below.
It allows the following: 120, 1230, 102, 1023, 012, 0123, and disallows the following: 000 and 0000 (the all-zero check uses (?!0+$), since (?!000$) by itself would still allow 0000). You're welcome.
^(?!0+$)\d{3,4}$
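A quick Python sanity check of the pattern, using the (?!0+$) form of the lookahead so that both 000 and 0000 are rejected ((?!000$) alone would still admit 0000):

```python
import re

pattern = re.compile(r"^(?!0+$)\d{3,4}$")

# Samples taken from the claim above, plus a few out-of-range strings
accepted = [s for s in ["120", "1230", "102", "1023", "012", "0123"] if pattern.match(s)]
rejected = [s for s in ["000", "0000", "12", "12345", "12a"] if not pattern.match(s)]

print(accepted)  # all six valid samples match
print(rejected)  # all five invalid samples are refused
```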
If your local dev environment is Windows, the fix is simple: just add the following lines to the csproj:
<TargetFramework>net8.0</TargetFramework>
<RuntimeIdentifier>linux-x64</RuntimeIdentifier>
For a library project, only "version" is available:
defaultConfig {
    ...
    version = "1.2.3"
    ...
}
I found a solution to this problem by setting DISABLE_SERVER_SIDE_CURSORS to True:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "USER": "mydatabaseuser",
        ...
        "DISABLE_SERVER_SIDE_CURSORS": True,  # This line
    },
}
https://docs.djangoproject.com/en/5.2/ref/settings/#std-setting-DATABASE-DISABLE_SERVER_SIDE_CURSORS
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force
git config --global core.longpaths true
The environment variables below fixed my problem:
# Qt environment variables
export QT_QPA_GENERIC_PLUGINS=tslib:/dev/input/event1
export QT_QPA_EVDEV_TOUCHSCREEN_PARAMETERS=tslib:/dev/input/event1
export QT_QPA_FB_TSLIB=1
I’m currently facing the same issue. Initially, I thought next-intl might be causing it, but after reading your post, it seems like the root cause might be something else. My situation is similar but slightly different: in development, everything works normally and meta tags are rendered in the <head>. However, in production, the meta tags are initially rendered in the <body> on the first page load. After navigating to another page, everything is then correctly rendered in the <head>.
You can just add your extra columns to the input table. Vertex AI will ignore them and they will be included in the output table.
This is a known issue https://github.com/magento/magento2/issues/37208
The simplest solution is to set some welcome text value in the configuration.
It was fixed in 2.4.7
In my case, I had a pod many days old.
Deleting the old "logs" (buffers inside the fluentd pod) was my solution:
rm /buffers/flow:namespace_name:*
Does this return the expected results? I've highlighted cells for illustration of the first result. "Shanghai" occurs 2 times in columns C to H where "Shanghai" is in the same row in column A.
=SUM(N(IF($A$1:$A$30=K1,$C$1:$H$30=K1)))
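The logic of that array formula can be sketched in Python: count occurrences of the key in columns C to H, but only on rows whose column A equals the key (the sample rows here are made up for illustration):

```python
# Each tuple is (column A value, values of columns C..H) for one row.
rows = [
    ("Shanghai", ["Shanghai", "Beijing", "Shanghai", "", "", ""]),
    ("Beijing",  ["Shanghai", "Beijing", "", "", "", ""]),  # skipped: column A differs
    ("Shanghai", ["Tokyo", "", "", "", "", ""]),
]

key = "Shanghai"  # plays the role of cell K1
count = sum(cells.count(key) for a, cells in rows if a == key)
print(count)  # 2: two matches in the first row, none in the third
```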
PostgreSQL 18 now supports OLD and NEW in RETURNING clauses. From the example in the manual:
UPDATE products SET price = price * 1.10
WHERE price <= 99.99
RETURNING name, old.price AS old_price, new.price AS new_price,
new.price - old.price AS price_change;
Your original attribute, #[Given('product :arg1 with price :arg2')], did not account for the double quotes around the product name "A book".
<?php

use Behat\Behat\Context\Context;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use PHPUnit\Framework\Assert;

class FeatureContext implements Context
{
    public function __construct()
    {
    }

    #[Given('product ":arg1" with price :arg2')]
    public function productWithPrice($arg1, $arg2): void
    {
        // Now you can access the arguments correctly.
        // $arg1 will be "A book" (without the quotes)
        // $arg2 will be 5
    }
}
Output:
php vendor/bin/behat
Feature: Product basket
In order to buy products
As a customer
I need to be able to put interesting products into a basket
Scenario: Buying a single product under 10 dollars
Given product "A book" with price 5
# The step is now found and matched.
1 scenario (1 passed)
1 step (1 passed)
0m0.00s (4.01Mb)
After doing some more research I realized that I was actually dealing with 2 different APIs: the first one is my custom API and the second is the Microsoft Graph API. So, essentially, it's one token per API. Here is what I did:
Get the access token from the SPA.
Use that access token to request another token from the API authority (OpenID, etc.), being sure to request the scopes needed for Microsoft Graph. It's best to use the default, which gets all available scopes: "https://graph.microsoft.com/.default".
Pass the new token to a Microsoft Graph endpoint, such as https://graph.microsoft.com/v1.0/me.
That will get you a JSON string response.
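The second step is Azure AD's on-behalf-of grant. A sketch of the token request body (all credential values are placeholders; POST this to your tenant's /oauth2/v2.0/token endpoint with any HTTP client):

```python
from urllib.parse import urlencode

def build_obo_request(client_id, client_secret, spa_access_token):
    """Build the body of an on-behalf-of token request for Microsoft Graph."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "client_id": client_id,
        "client_secret": client_secret,
        "assertion": spa_access_token,  # the access token received from the SPA
        "scope": "https://graph.microsoft.com/.default",
        "requested_token_use": "on_behalf_of",
    })

# Placeholder values for illustration only
body = build_obo_request("my-client-id", "my-secret", "eyJ0eXAi.placeholder.token")
print(body)
```

The response contains the new access token, which goes into the Authorization: Bearer header of the Graph call.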
Central Package Management with conditional ItemGroups seems to work for me.
Directory.Packages.props
<ItemGroup>
  <PackageVersion Include="Serilog" Version="4.3.0" />
</ItemGroup>

<ItemGroup Condition=" '$(TargetFramework)' == 'net8.0' ">
  <PackageVersion Include="Serilog.Extensions.Logging" Version="8.0.0" />
</ItemGroup>

<ItemGroup Condition=" '$(TargetFramework)' == 'net9.0' ">
  <PackageVersion Include="Serilog.Extensions.Logging" Version="9.0.2" />
</ItemGroup>
https://wind010.hashnode.dev/centralizing-nuget-package-references
I was able to get the program to work, at least via command line. The comments were helpful, especially stripping the program down to the minimum needed to show the issue. An explanation is below.
Root cause: I believe the culprit was a missing tk-tools library. Tk, included by default, allowed the GUI to be built and loaded but not executed. You don't have to import tk-tools and I never received an error message that the functionality was missing. The "local" or default Python instance does not include tk-tools.
Method: Retracing my steps using the history command, I noticed that while I had entered the python -m venv command, I didn't follow up with source activate. This meant I was still using the "local" Python instance and libraries. It became apparent when I added back some code deleted for this question but received an error that the referenced library was missing. The original script was created inside the venv and included tk-tools and other libraries, but most of my subsequent development didn't activate the environment. Some libraries were installed in both.
Thonny: Like the command line, Thonny defaulted to the “local” python instance. Dummy me, I assumed since I had a venv structure, Thonny would have used it to execute the script. There are internet pages providing instruction to configure Thonny to use a venv but those instructions didn’t match the screens of my version. Given I had the command line working I didn’t pursue further.
Years later...
Running RedHat 7.x and 8.x, I found that my /etc/bashrc is sourcing the file:
/usr/share/git-core/contrib/completion/git-prompt.sh
Check https://openocd.org/doc-release/README.Windows
Most probably OpenOCD needs the generic WinUSB driver instead of Segger's.
If you want a modern and reliable option, try ezCNAM. It provides a clean REST API for real-time CNAM lookups, perfect for developers integrating caller name delivery into their apps or phone systems.
Oniguruma in jq supports Perl's \Q/\E 👀
$ jq -n '"foo $ bar" as $var | "hello foo $ bar baz" | sub("\\Q\($var)\\E"; "new string")'
"hello new string baz"
From another post, I modified my command as follows:
npx --legacy-peer-deps sv create
This gave me a new error that a file it was trying to create already existed. Following the advice given, I found that these were cache files in an .npx directory. I deleted all the cache files, and the command now works.
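As a hedged sketch of that cleanup (the `_npx` cache path below is npm's usual default; confirm yours with `npm config get cache` before deleting anything):

```shell
# Find npm's cache directory, falling back to the common default if npm isn't on PATH
cache_dir=$(npm config get cache 2>/dev/null || echo "$HOME/.npm")
echo "npx cache lives in: $cache_dir/_npx"
# rm -rf "$cache_dir/_npx"    # uncomment to delete the stale cache files
```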
document.onclick = function (e) {
  var t = e.target;
  if (t.className === 'bla') { // === for comparison; a single = would assign
    // Your code for the clicked element t
  }
};
Use
moveSelectionToEnd
Draft.js has this ability built in:
https://draftjs.org/docs/api-reference-editor-state/#moveselectiontoend
I use:
mit-scheme --batch-mode --load file.scm --eval '(exit)'
This mostly does what I want, although it still drops into the REPL if the script encounters an error, rather than terminating with a nonzero exit code (which is what I'd like, and what most other Scheme implementations do).
If you look at the model documentation on Hugging Face for 'TinyLlama/TinyLlama-1.1B-Chat-v0.6', the 'Inference Providers' section on the right side says 'This model isn't deployed by any Inference Provider.' This means you cannot use the model through the free, serverless Inference API provided by Hugging Face. In this case, you must run inference locally, i.e., download the model.
This is confusing because LangChain's HuggingFaceEndpoint is primarily designed to work with Inference APIs. It is frustrating, but we have to get used to these kinds of things as beginners or self-learners.
A modification to @Carlo Zanocco's answer:
This solution didn't really work for me, and there were a couple of key changes I made to make it look like the solution desired, described in the comments below.
# It's good practice to import only what you need instead of importing all of openpyxl
from openpyxl import load_workbook
from openpyxl.chart import LineChart, Reference
filename = "chart.xlsx"
workbook = load_workbook(filename)
sheet = workbook["chart"]
chart = LineChart()
# This makes the x axis and y axis values actually appear
chart.x_axis.delete = False
chart.y_axis.delete = False
chart.legend = None # This removes the ugly legend from the right of the graph
chart.title = "Chart Title"
# Changed to min_row=2 so the row value can match with the row label
# \/
labels = Reference(sheet, min_col=1, min_row=2, max_row=sheet.max_row)
data = Reference(sheet, min_col=2, min_row=2, max_row=sheet.max_row)
chart.add_data(data, titles_from_data=False)
chart.set_categories(labels)
# Right now, the values are all technically in their own series, hence being labeled separately.
# This makes them all the same color, so we can pretend they're the same line
for s in chart.series:
    s.graphicalProperties.line.solidFill = "4472c4"  # Microsoft Blue
sheet.add_chart(chart, "D1")
workbook.save(filename)
Before and after
(big credit goes to this thread, which got me on a great path to solving all my problems: https://groups.google.com/g/openpyxl-users/c/khC6BTqaH3Y)
This converts every numeric value that appears directly under an object key into a string (bare numbers inside arrays are left alone, as the output below shows).
It uses @sln's JSON validation and error-detection regex functions. These functions parse JSON to the specification and can be used as a query/replace engine on steroids.
For the details on how this regex works, and more examples see :
https://stackoverflow.com/a/79785886/15577665
Regex Demo : https://regex101.com/r/klszdB/1
Php Demo : https://onlinephp.io/c/62561
(?:(?=(?&V_Obj)){|(?!^)\G(?&Sep_Obj)\s*)(?:(?&V_KeyVal)(?&Sep_Obj))*?\s*(?&Str)\s*:\s*\K(?&Numb)(?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Er_Obj>(?>{(?:\s*(?&Str)(?:\s*:(?:\s*(?:(?&Er_Value)|(?<Er_Ary>\[(?:\s*(?:(?&Er_Value)|(?&Er_Ary)|(?&Er_Obj))(?:(?&Sep_Ary)|(*ACCEPT)))*(?:\s*\]|(*ACCEPT)))|(?&Er_Obj))(?:(?&Sep_Obj)|(*ACCEPT))|(*ACCEPT))|(*ACCEPT)))*(?:\s*}|(*ACCEPT))))(?<Er_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*}))
Replace "$0"
Regex Comments
(?: # ----------
(?= (?&V_Obj) ) # Should be a valid object ahead
{ # Open Object
| # or,
(?! ^ ) # Here we are in a valid JSON Object
\G # Continuation anchor, where we last left off
(?&Sep_Obj) \s* # Object Separator after last match
) # ----------
(?: (?&V_KeyVal) (?&Sep_Obj) )*? # Drill down to the next key that is a Number Type
\s* (?&Str) \s* : \s*
\K # Stop recording here, match just the Number
(?&Numb)
# JSON functions -
# ------------------
(?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Er_Obj>(?>{(?:\s*(?&Str)(?:\s*:(?:\s*(?:(?&Er_Value)|(?<Er_Ary>\[(?:\s*(?:(?&Er_Value)|(?&Er_Ary)|(?&Er_Obj))(?:(?&Sep_Ary)|(*ACCEPT)))*(?:\s*\]|(*ACCEPT)))|(?&Er_Obj))(?:(?&Sep_Obj)|(*ACCEPT))|(*ACCEPT))|(*ACCEPT)))*(?:\s*}|(*ACCEPT))))(?<Er_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*}))
Php source :
<?php
// Enter your code here, enjoy!
$Rx = '/(?:(?=(?&V_Obj)){|(?!^)\G(?&Sep_Obj)\s*)(?:(?&V_KeyVal)(?&Sep_Obj))*?\s*(?&Str)\s*:\s*\K(?&Numb)(?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Er_Obj>(?>{(?:\s*(?&Str)(?:\s*:(?:\s*(?:(?&Er_Value)|(?<Er_Ary>\[(?:\s*(?:(?&Er_Value)|(?&Er_Ary)|(?&Er_Obj))(?:(?&Sep_Ary)|(*ACCEPT)))*(?:\s*\]|(*ACCEPT)))|(?&Er_Obj))(?:(?&Sep_Obj)|(*ACCEPT))|(*ACCEPT))|(*ACCEPT)))*(?:\s*}|(*ACCEPT))))(?<Er_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*}))/';
$json = '
{
"first_name":"sample",
"last_name": "lastname",
"integer" : 100,
"float" : 1555.20,
"createddate":"2015-06-25 09:57:28",
"asder" :
[
200,
100,
{
"nother1" : "here1",
"digi1" : 900e10,
"nother2" : "here2",
"digi2" : 3.14
},
400,
37
]
}';
$new_json = preg_replace( $Rx, '"$0"', $json );
var_dump( $new_json );
Output :
string(384) "
{
"first_name":"sample",
"last_name": "lastname",
"integer" : "100",
"float" : "1555.20",
"createddate":"2015-06-25 09:57:28",
"asder" :
[
200,
100,
{
"nother1" : "here1",
"digi1" : "900e10",
"nother2" : "here2",
"digi2" : "3.14"
},
400,
37
]
}"
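As a cross-check of the same transformation in Python, the stdlib json module can quote numbers through its parse hooks. Note this is a sketch of an alternative, not the regex approach above, and it quotes array numbers too, which the regex deliberately leaves alone:

```python
import json

src = '{"integer": 100, "float": 1555.20, "asder": [200, {"digi2": 3.14}]}'

# parse_int/parse_float receive the raw numeric token as text, so passing
# str returns it unchanged and turns every number into a string value
# (e.g. 1555.20 keeps its trailing zero instead of becoming 1555.2).
data = json.loads(src, parse_int=str, parse_float=str)
print(json.dumps(data))
```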
I raised a bug report with JetBrains (see https://youtrack.jetbrains.com/issue/IDEA-380831/NoClassDefFoundError-org-slf4j-LoggerFactory-when-running-in-Intellij-but-works-fine-from-the-command-line) and got a solution to the issue.
The solution was to disable the module path, as shown in the screenshot below. Thank you @CrazyCoder for helping me out.
Update your Node version (note that node --watch requires Node 18.11 or later):
nvm install latest
nvm use latest
node -v
and retry:
node --watch server.js