Came across this 12 years later looking into the same issue. The answer is that SQLPlus can't properly handle the encoding of your PowerShell terminal's input and/or output. Run the following before issuing your sqlplus command; it sets your console text encoding to ASCII, which SQLPlus can handle.
[Console]::InputEncoding = [System.Text.Encoding]::ASCII
[Console]::OutputEncoding = [System.Text.Encoding]::ASCII
Yes, but you need to know how to code too; to determine the output you can just look at the code and see what it does.
Your code above doesn't make sense: the "main" function doesn't accept variables in its declaration. Instead, write another function and call it from "main".
Tokenization and generation differ across Transformer versions due to updates in tokenizers, model architectures, and decoding strategies. Changes in vocabulary, padding, or special tokens can affect output length and format. Upgraded generation methods may also modify behavior, influencing fluency, repetition, or coherence across different model versions.
To delete a session cookie with JavaScript, set it with an expired date:
document.cookie = "yourCookieName=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/;";
This works even for session cookies: it tells the browser to remove it immediately.
In 2025, with Boot 3.5, when I accidentally put the main class in the default package, I got an error very similar to the OP's.
I started to look into the root cause, which is:
Caused by: java.lang.ClassNotFoundException: io.r2dbc.spi.ValidationDepth ...
And I scratched my head for a while. What on earth is my project doing with r2dbc?!
Fortunately, I quickly found this post when I searched for the error "Failed to read candidate component class:".
After finding it, I realized that there was a big WARN about the default package:
2025-07-11T12:47:18.764+02:00 INFO 3534381 --- [ main] XpathDemo : Starting XpathDemo using Java 21.0.1 with PID 3534381 (/home/riskop/IdeaProjects/java_xpath_localname_namespace_uri/target/classes started by riskop in /home/riskop/IdeaProjects/java_xpath_localname_namespace_uri)
2025-07-11T12:47:18.765+02:00 INFO 3534381 --- [ main] XpathDemo : No active profile set, falling back to 1 default profile: "default"
2025-07-11T12:47:18.792+02:00 WARN 3534381 --- [ main] ionWarningsApplicationContextInitializer :
** WARNING ** : Your ApplicationContext is unlikely to start due to a @ComponentScan of the default package.
Seemingly I am blind.
Anyway the full output:
:: Spring Boot :: (v3.5.3)
2025-07-11T12:47:18.764+02:00 INFO 3534381 --- [ main] XpathDemo : Starting XpathDemo using Java 21.0.1 with PID 3534381 (/home/riskop/IdeaProjects/java_xpath_localname_namespace_uri/target/classes started by riskop in /home/riskop/IdeaProjects/java_xpath_localname_namespace_uri)
2025-07-11T12:47:18.765+02:00 INFO 3534381 --- [ main] XpathDemo : No active profile set, falling back to 1 default profile: "default"
2025-07-11T12:47:18.792+02:00 WARN 3534381 --- [ main] ionWarningsApplicationContextInitializer :
** WARNING ** : Your ApplicationContext is unlikely to start due to a @ComponentScan of the default package.
2025-07-11T12:47:19.096+02:00 WARN 3534381 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanDefinitionStoreException: Failed to read candidate component class: URL [jar:file:/home/riskop/.m2/repository/org/springframework/boot/spring-boot-autoconfigure/3.5.3/spring-boot-autoconfigure-3.5.3.jar!/org/springframework/boot/autoconfigure/r2dbc/ConnectionFactoryConfigurations$PoolConfiguration.class]
2025-07-11T12:47:19.098+02:00 INFO 3534381 --- [ main] .s.b.a.l.ConditionEvaluationReportLogger :
Error starting ApplicationContext. To display the condition evaluation report re-run your application with 'debug' enabled.
2025-07-11T12:47:19.106+02:00 ERROR 3534381 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.BeanDefinitionStoreException: Failed to read candidate component class: URL [jar:file:/home/riskop/.m2/repository/org/springframework/boot/spring-boot-autoconfigure/3.5.3/spring-boot-autoconfigure-3.5.3.jar!/org/springframework/boot/autoconfigure/r2dbc/ConnectionFactoryConfigurations$PoolConfiguration.class]
at org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider.scanCandidateComponents(ClassPathScanningCandidateComponentProvider.java:510) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider.findCandidateComponents(ClassPathScanningCandidateComponentProvider.java:351) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ClassPathBeanDefinitionScanner.doScan(ClassPathBeanDefinitionScanner.java:277) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ComponentScanAnnotationParser.parse(ComponentScanAnnotationParser.java:128) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConfigurationClassParser.doProcessConfigurationClass(ConfigurationClassParser.java:346) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConfigurationClassParser.processConfigurationClass(ConfigurationClassParser.java:281) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConfigurationClassParser.parse(ConfigurationClassParser.java:204) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConfigurationClassParser.parse(ConfigurationClassParser.java:172) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:418) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:290) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:349) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:118) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:791) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:609) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:318) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1361) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1350) ~[spring-boot-3.5.3.jar:3.5.3]
at XpathDemo.main(XpathDemo.java:17) ~[classes/:na]
Caused by: java.lang.IllegalStateException: Could not evaluate condition on org.springframework.boot.autoconfigure.r2dbc.ConnectionFactoryConfigurations$PoolConfiguration due to io/r2dbc/spi/ValidationDepth not found. Make sure your own configuration does not rely on that class. This can also happen if you are @ComponentScanning a springframework package (e.g. if you put a @ComponentScan in the default package by mistake)
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:54) ~[spring-boot-autoconfigure-3.5.3.jar:3.5.3]
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:99) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:88) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:71) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider.isConditionMatch(ClassPathScanningCandidateComponentProvider.java:564) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider.isCandidateComponent(ClassPathScanningCandidateComponentProvider.java:547) ~[spring-context-6.2.8.jar:6.2.8]
at org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider.scanCandidateComponents(ClassPathScanningCandidateComponentProvider.java:471) ~[spring-context-6.2.8.jar:6.2.8]
... 19 common frames omitted
Caused by: java.lang.NoClassDefFoundError: io/r2dbc/spi/ValidationDepth
at java.base/java.lang.Class.getDeclaredFields0(Native Method) ~[na:na]
at java.base/java.lang.Class.privateGetDeclaredFields(Class.java:3473) ~[na:na]
at java.base/java.lang.Class.getDeclaredField(Class.java:2780) ~[na:na]
at org.springframework.boot.context.properties.bind.DefaultBindConstructorProvider$Constructors.isInnerClass(DefaultBindConstructorProvider.java:144) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.DefaultBindConstructorProvider$Constructors.getCandidateConstructors(DefaultBindConstructorProvider.java:134) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.DefaultBindConstructorProvider$Constructors.getConstructors(DefaultBindConstructorProvider.java:103) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.DefaultBindConstructorProvider.getBindConstructor(DefaultBindConstructorProvider.java:42) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.ValueObjectBinder$ValueObject.get(ValueObjectBinder.java:230) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.ValueObjectBinder$ValueObject.get(ValueObjectBinder.java:220) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.ValueObjectBinder.bind(ValueObjectBinder.java:77) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.lambda$bindDataObject$6(Binder.java:488) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.fromDataObjectBinders(Binder.java:493) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.lambda$bindDataObject$7(Binder.java:487) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder$Context.withIncreasedDepth(Binder.java:608) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder$Context.withDataObject(Binder.java:594) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.bindDataObject(Binder.java:487) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.bindObject(Binder.java:426) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.bind(Binder.java:357) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.bind(Binder.java:345) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.bind(Binder.java:275) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.context.properties.bind.Binder.bind(Binder.java:236) ~[spring-boot-3.5.3.jar:3.5.3]
at org.springframework.boot.autoconfigure.r2dbc.ConnectionFactoryConfigurations$PooledConnectionFactoryCondition.getMatchOutcome(ConnectionFactoryConfigurations.java:155) ~[spring-boot-autoconfigure-3.5.3.jar:3.5.3]
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:47) ~[spring-boot-autoconfigure-3.5.3.jar:3.5.3]
... 25 common frames omitted
Caused by: java.lang.ClassNotFoundException: io.r2dbc.spi.ValidationDepth
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) ~[na:na]
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) ~[na:na]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526) ~[na:na]
... 48 common frames omitted
Process finished with exit code 1
If two arrays differ in size, they are added together by broadcasting.
If the arrays have different numbers of dimensions (axes), the shape of the array with fewer dimensions is padded with ones on the left.
Thus, an array of shape (3,) can be treated as an array of shape (1, 3), but not as an array of shape (3, 1).
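A minimal NumPy sketch of these rules (the array values are only illustrative):

```python
import numpy as np

a = np.ones((2, 3))   # shape (2, 3)
b = np.arange(3)      # shape (3,)

# b is treated as shape (1, 3): padded with a 1 on the left,
# then stretched along the first axis to match a.
result = a + b
assert result.shape == (2, 3)

# The padding never happens on the right: (3,) is not treated as
# (3, 1), so adding b to a (3, 2) array fails.
try:
    np.ones((3, 2)) + b
except ValueError:
    pass  # "operands could not be broadcast together"
```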
Guys like Peter Kofler, who don't answer the question but instead give advice on how to get around it, REALLY ANNOY ME.
Just focus the camera zoom on the origin, not both.
A quick Google search turned up https://github.com/ruudud/devdns; this might be able to fix your problem?
Another, very different, solution might be to use a service registry that you always have running in your Docker infrastructure while you run your services wherever. Documentation for using a service registry with Spring: https://spring.io/guides/gs/service-registration-and-discovery. (There should be similar solutions out there for most of the bigger ecosystems.)
Where are you trying to run the code?
What version of Python are you using?
I can in SQL 2016,
but not in SQL 2022. Why?
This has been available on Foldables for a while if you use https://typelevel.org/cats/, for example:
import cats.syntax.foldable._
val list = List(1, 2, 3, 4, 5)
println(list.sliding2)
// List((1,2), (2,3), (3,4), (4,5))
sliding3, sliding4, etc. are also defined; see the scaladoc for more information.
Did you find a solution for this error, please?
You can follow a layered architecture like Clean Architecture where:
- The Domain layer holds business logic
- The Application layer manages commands/queries (e.g., using CQRS)
- The Infrastructure layer implements persistence, messaging, etc.
I recently created a [GitHub template](https://github.com/yourusername/CsharpDDDTemplate) for this exact pattern. It includes CQRS, DI setup, Docker, Kafka, and more. Might be a useful reference to see a working example.
Use openResources(...path) followed by waitUntilLoaded():
await VSBrowser.instance.openResources('/your/folder/path');
await new Workbench().waitUntilLoaded();
This will open the file for you.
182 users * 55 requests is fine if you treat all 55 sub-requests as 1 transaction and each user runs that 1 transaction without any think time (constant timer or anything else).
Click on Apps
Select your app
Click on TestFlight
Select a build
Click on Build Metadata
Click on App File Sizes
Here are the latest requirements as per Apple's update. Screenshot-Specifications
Currently it only requires:
iPhone 6.5"
iPad 13"
The answer to the above question is here:
https://support.google.com/docs/thread/356832818?hl=en&sjid=10003399125056233914-EU
The solution uses @Cooper's code as follows:
function allsheetslist() {
  var ss = SpreadsheetApp.getActive();
  var shts = ss.getSheets();
  var html = '<table>';
  shts.forEach(function(sh, i) {
    html += Utilities.formatString('<tr><td><input type="button" value="%s" onClick="gotoSheet(%s);" /></td></tr>', sh.getName(), sh.getIndex());
  });
  html += '</table>';
  html += '<script>function gotoSheet(index){google.script.run.gotoSheetIndex(index);}</script>';
  var userInterface = HtmlService.createHtmlOutput(html);
  SpreadsheetApp.getUi().showSidebar(userInterface);
}
function gotoSheetIndex(index) {
  var ss = SpreadsheetApp.getActive();
  var shts = ss.getSheets();
  shts[index - 1].activate();
}
@--Hyde then provided his solution, replacing a section of the code with the following:
shts.forEach((sheet, i) => {
  const sheetName = sheet.getName();
  if (sheetName.match(/^(Sheet1|Sheet2|Another sheet|Fourth)$/i))
    html += Utilities.formatString('<tr><td><input type="button" value="%s" onClick="gotoSheet(%s);" /></td></tr>', sheetName, sheet.getIndex());
});
This does exactly what I hoped it would. It enables me to specify the exact sheets I want in my sidebar.
I hope that this proves useful to others.
Try this command: adb devices
If you don't see your device connection in the list, you can kill the processes: killall adb
Then reconnect your device. It helped me.
You must have solved the problem by now, but I'll leave a short answer for searchers.
In short, you cannot.
The meta tag arrived with Chrome 108.
Here is the link. Safari didn't introduce it.
I'm still looking for other ways to handle interactive-widget in Safari. Wish you guys good luck, and don't waste your time if the meta tag doesn't work.
Use CONCAT_WS.
WHERE CONCAT_WS(' ', firstName, lastName) LIKE CONCAT('%', 'alex', '%')
It concatenates all non-NULL arguments from the 2nd onward, using the first one as the separator. NULL arguments are skipped; the result is NULL only if the separator itself is NULL.
I was facing a similar problem, where Gmail and the Play Store were working on the emulator. I was able to resolve the issue by just updating the Chrome browser from the emulator's Play Store.
As per the xgboost documentation (xgboost docs): apart from parameters such as "objective" (which defaults to squared error), "enable_categorical" (which defaults to False), and "missing" (which defaults to NaN), all other parameters are optional and default to None. That is why you are getting None.
You can print all the parameters with the code below to check. In your model object's default display, the output is word-wrapped to fit the display and line limits.
# Get the default parameters
default_params = model.get_params()
# Print the default parameters
for param, value in default_params.items():
print(f"{param}: {value}")
It might be due to a wrong declaration of the function.
numpy can be used to extract the maximal value from a histogram:
import numpy as np
hist_data, _ = np.histogram(df['error'], bins='auto')
ymax = max(hist_data)
The only way to upload reports to Artillery Cloud is to use --record with your API key, as described in the Artillery docs.
Alternatively, you can try running Artillery from a machine on a different network (e.g. from your home network, or from a cloud instance).
I'm facing the same issue as @shiftyscales: even when running in separate processes, the second instance of the app somehow remains alive after quitting, which leaves the first instance frozen. Not until I kill the second instance's "stuck" process does the first one start reacting again. Any idea why this is happening?
In my case, position: sticky elements failing to stick was due to having height: -webkit-fill-available in one of the element's ancestors.
Baixaryoucine: LazyVim doesn't add Perl text filters to the runtime path by default; you may need to configure it manually.
I changed the path in my .zshrc as follows, from:
export PATH="~/.config/nvim/filter:$PATH"
to:
export PATH="/Users/mstep/.config/nvim/filter:$PATH"
(A ~ inside double quotes is not expanded by the shell, which is why the literal absolute path works.)
Everything is working now.
The error is caused by the clock deviation between non-master FE hosts and the master FE host exceeding the default maximum of 5 seconds. NTP needs to be enabled to ensure time synchronization so that the difference stays below the default 5-second threshold.
This is controlled by the FE parameter max_bdbje_clock_delta_ms.
For the FE parameter description, see the FE configuration items:
https://doris.apache.org/zh-CN/docs/admin-manual/config/fe-config
I just went back a couple of commits and found the change. I have no idea why this happened; maybe a side effect of upgrading React Native. Anyway, after setting this back to Apple Watch, I was able to run the simulator again.
Hey, I am having the same issue: when I open and close a modal on one screen, then go to another screen or component and try to open another modal, the previously opened modal flashes. I am using React Native 0.79.4 and the new architecture.
So just some background. I needed to update a MySQL 5.7 DB to 8.0 in Docker. But I kept receiving the dreaded error below:
2025-07-11T07:39:47.498269Z 1 \[ERROR\] \[MY-012526\] \[InnoDB\] Upgrade is not supported after a crash or shutdown with innodb_fast_shutdown = 2. This redo log was created with MySQL 5.7.33, and it appears logically non empty. Please follow the instructions at http://dev.mysql.com/doc/refman/8.0/en/upgrading.html
2025-07-11T07:39:47.498314Z 1 \[ERROR\] \[MY-012930\] \[InnoDB\] Plugin initialization aborted with error Generic error.
2025-07-11T07:39:47.912827Z 1 \[ERROR\] \[MY-011013\] \[Server\] Failed to initialize DD Storage Engine.
2025-07-11T07:39:47.913225Z 0 \[ERROR\] \[MY-010020\] \[Server\] Data Dictionary initialization failed.
2025-07-11T07:39:47.913276Z 0 \[ERROR\] \[MY-010119\] \[Server\] Aborting
2025-07-11T07:39:47.914128Z 0 \[System\] \[MY-010910\] \[Server\] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.32) MySQL Community Server - GPL.
So how did I solve it:
I CANNOT EMPHASIZE ENOUGH HOW IMPORTANT IT IS TO GET A BACKUP THAT YOU CAN MOVE SOMEWHERE ELSE (another physical server with Docker) AND WORK WITH FREELY. I take a FULL copy at the slowest time for the DB (2:00am):
docker cp ACME_PRODUCTION_DOCKER_CONTAINER_NAME:/var/lib/mysql/ /tmp/acmeproddbs_mysql/
zip -q -r "/tmp/var_lib_mysql_Folder_2025-07-11.zip" /tmp/acmeproddbs_mysql/mysql/\*
!!! YOU ARE NOW ON YOUR TEST SERVER !!!
$ docker rm -f ACME_DOCKER_CONTAINER_NAME [[BE-CAREFUL TRIPLE VERIFY YOU HAVE A VALID BACKUP!]]
$ rm -r /var/lib/mysql_acme_dbs/ [[BE-CAREFUL SEE NOTE ABOVE ABOUT TRIPLE CHECKING YOUR BACKUP IS VALID]]
$ cat /etc/group | grep mysql [[DOES THE mysql GROUP ALREADY EXIST?]]
$ groupadd -r mysql && useradd -r -g mysql mysql [[RUN_IF_NEEDED]]
$ unzip /tmp/var_lib_mysql_Folder_2025-07-11.zip -d /var/lib/mysql_acme_dbs
$ chmod -R 777 /var/lib/mysql_acme_dbs [[MY_FRUSTRATION (ON MY SYSTEM) GOT THE BEST OF ME :( ]]
$ chown -R mysql:mysql /var/lib/mysql_acme_dbs
$ docker run [COMMAND_BELOW]
### NOT NEEDED BUT IT WILL SHOW YOU WHAT NEEDS TO HAPPEN BEFORE UPGRADE, SHOULD IT BE NEEDED.
$ docker exec -it ACME_DOCKER_CONTAINER_NAME bash
bash-4.2# mysqlsh
MySQL JS > \connect root@localhost:3306
Please provide the password for 'root@localhost:3306': MYSQL_PASSWORD
MySQL JS > util.checkForServerUpgrade()
[[REPORT AFTER CHECK IS SHOWN HERE]]
MySQL JS > \quit
###
$ docker exec -it ACME_DOCKER_CONTAINER_NAME mysql_upgrade -uroot -pMYSQL_PASSWORD
$ docker stop ACME_DOCKER_CONTAINER_NAME [[IMPORTANT: USE "stop" AND NOT "rm -f" SO IT WILL GRACEFULLY SHUT DOWN AND YOU WON'T GET THE ERROR MESSAGE]]
$ docker rm ACME_DOCKER_CONTAINER_NAME
$ docker run [["docker run..." COMMAND_BELOW BUT THIS TIME WITH "-d mysql/mysql-server:8.0"]]
docker run -p 3306:3306 \
--name=ACME_DOCKER_CONTAINER_NAME \
--mount type=bind,src=/var/lib/mysql_acme_dbs,dst=/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=MYSQL_PASSWORD \
-d mysql/mysql-server:5.7 \
mysqld \
--lower_case_table_names=1 \
--max_connections=3001 \
--max_allowed_packet=128M \
--innodb_buffer_pool_size=128M \
--innodb_fast_shutdown=1 \
--host_cache_size=0
AT THE END YOU SHOULD SEE "MySQL 8.0"
$ docker exec -it ACME_DOCKER_CONTAINER_NAME mysql -uroot -pMYSQL_PASSWORD -v
HOPE THIS HELPS SOMEONE! Please write back here so others can use this with confidence. Of course nothing here is written in stone, so change the parameters to your needs. I am just pointing out the process of how I got it to work.
Thank you all for your efforts in helping me better understand that there is more than one way to arrive at a solution, and that it can vary between DB engines. I had to use @ValNik's suggestion of a subquery in order to finish our new item information webpage presenting a yearly price change summary.
This is probably not optimal, but it works, and that is good enough.
SELECT MYPERIOD, MYQTY1, SALES, COST, PRICE, COSTPRICE, MARGIN,
       LAG(PRICE) OVER (ORDER BY MYPERIOD) AS PREV_PRICE
FROM (
    SELECT
        LEFT(p.D3611_Transaktionsda, 4) AS MYPERIOD,
        SUM(p.D3631_Antal) AS MYQTY1,
        SUM(p.D3653_Debiterbart) AS SALES,
        SUM(p.D3651_Kostnad) AS COST,
        SUM(p.D3653_Debiterbart) / SUM(p.D3631_Antal) AS PRICE,
        SUM(p.D3651_Kostnad) / SUM(p.D3631_Antal) AS COSTPRICE,
        (SUM(p.D3653_Debiterbart) - SUM(p.D3651_Kostnad)) / SUM(p.D3653_Debiterbart) AS MARGIN
    FROM PUPROTRA AS p
    WHERE p.D3605_Artikelkod = 'XYZ'
      AND p.D3601_Ursprung = 'O'
      AND p.D3625_Transaktionsty = 'U'
      AND p.D3631_Antal <> 0
      AND p.D3653_Debiterbart <> 0
    GROUP BY LEFT(p.D3611_Transaktionsda, 4)
) AS T
Just check whether you have one of these:
MudThemeProvider
MudPopoverProvider
MudDialogProvider
MudSnackbarProvider
doubled somewhere (e.g. in your Layout files).
It looks like TPU v6e doesn’t support TensorFlow; currently, only PyTorch and JAX are supported. https://cloud.google.com/tpu/docs/v6e-intro
It might be the casing of a folder in the path, so check the full path: "/path/File.tsx" is different from "/Path/File.tsx". The file's casing is the more obvious part, but the folders in the path are just as important and less obvious.
This is working for me:
db_dict = {"col" : sqlalchemy.dialects.mysql.VARCHAR(v)}
If anyone's scratching their head like me in the 2025 version of IDEA: they've moved it to Settings -> Advanced Settings -> Version Control -> "Use modal commit interface for Git and Mercurial". And the whole thing is a separate plugin now; not sure why they consider this a feature rather than a removal... Context: https://youtrack.jetbrains.com/issue/IJPL-177161/Modal-commit-moved-to-a-separate-plugin-in-2025.1-Discussion
Double check that the file is actually named Users.ts and not users.ts, as Vercel and Git are case-sensitive.
Try renaming the file to something else, committing and pushing, then renaming it back; that resolves case-related issues.
When I used RAND() with Synapse on a serverless pool, I got the same value from RAND() on every row. My workaround was to use ROW_NUMBER() as the seed for RAND(), which gave the RAND() call on each row a different seed, and it definitely generated different numbers. When I used the result to partition a dataset into an 80%/10%/10% split, all the partitions came out the right size, so I hope it is random enough.
SELECT RAND(ROW_NUMBER() OVER())
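For intuition, the same per-row-seeding idea can be sketched outside SQL. This is a hypothetical Python analogue (not Synapse's actual RAND implementation): seeding a fresh generator with each row number yields a distinct value per row, and the values are spread evenly enough to partition on.

```python
import random

# Seed a fresh generator with each row number, analogous to
# RAND(ROW_NUMBER() OVER()) giving every row its own seed.
def row_value(row_number: int) -> float:
    return random.Random(row_number).random()

values = [row_value(i) for i in range(10_000)]

# Bucket rows into the 80% training split from the per-row values.
train = sum(v < 0.80 for v in values)
assert 0.75 < train / 10_000 < 0.85  # roughly the intended split size
```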
Windows 11's new Contrast Theme affects many apps, including Eclipse. Before opening Eclipse, turn off the Windows 11 Contrast Theme by pressing left Alt + left Shift + Print Screen, then open Eclipse; all of Eclipse's own themes can now be used as you wish. When you tab back out to Windows, press left Alt + left Shift + Print Screen again to turn the Contrast Theme back on. Back in Eclipse, the system asks you to restart the app to enable the Contrast Theme; click No and enjoy programming with Eclipse's own (dark) theme.
key_stroke = getch() clears the buffer...
I observed weird behavior when performing a non-blocking accept loop: when there are multiple connection requests in the queue, only the first accept attempt succeeds, and all subsequent attempts fail with EAGAIN.
I hope this will be helpful.
Isn't it due to your Apply_Base stage not depending on the Preparation stage? It seems to be used in Apply_Base but only declared in Preparation.
https://github.com/fahadtahir1/pdf_renderer_api_android_with_okhttp_and_cache
Working example of the PDF renderer API with OkHttp and caching.
How can we do it for iOS? For iOS we have the issue that if we want to create a native module like a Swift file, we need to do it from Xcode itself; it's not possible from VS Code. Is there any fix for that?
I found the problem. In another file of my project, I had imported pyautogui. It seems that pyautogui and filedialog.askdirectory() don't work well together; the import alone was enough to cause the dialog to freeze.
To solve the issue, I removed the global import pyautogui and moved it inside the __init__() method (or inside the specific function where it's used). After this change, the folder selection dialog worked correctly without freezing.
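A minimal sketch of the described fix (the function body and the moveTo call are hypothetical; the point is where the import lives):

```python
# Module level: no "import pyautogui" here, so merely importing this
# module can no longer interfere with tkinter's filedialog.askdirectory().

def do_gui_automation():
    # Deferred import: pyautogui is only loaded when automation actually runs.
    import pyautogui
    pyautogui.moveTo(0, 0)
```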
Wrapping the buffer in using (sampleBuffer) inside DidOutputSampleBuffer solves the problem.
I think you mean authenticating multiple users that belong to multiple tenants.
In order to do that you need to implement an extra table called TenantUsers. This will hold the mappings between the users stored in Duende and your tenants. Then you can store the connection string for each tenant in Azure Key Vault or something similar (depending on your cloud provider).
After login you can show a dropdown with a list of tenants and when the user clicks one of them then it connects to the correct connection string that belongs to it and displays the screen associated to that specific tenant.
For extra security you should also enable 2FA with Google/Microsoft authenticator apps for all your users.
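As an illustration only (the table contents, names, and the in-memory "vault" are placeholders, not Duende or Azure Key Vault APIs), the lookup flow described above might look like:

```python
# Stand-in for the TenantUsers mapping table described above.
TENANT_USERS = {
    "alice": ["tenant-a", "tenant-b"],
    "bob": ["tenant-b"],
}

# Stand-in for per-tenant connection strings kept in a key vault.
CONNECTION_STRINGS = {
    "tenant-a": "Server=db-a;Database=app;",
    "tenant-b": "Server=db-b;Database=app;",
}

def tenants_for_user(user_id: str) -> list:
    """Tenants to offer in the post-login dropdown."""
    return TENANT_USERS.get(user_id, [])

def connection_string(tenant_id: str) -> str:
    """Resolve the tenant's connection string (a vault call in production)."""
    return CONNECTION_STRINGS[tenant_id]

assert tenants_for_user("alice") == ["tenant-a", "tenant-b"]
assert connection_string("tenant-b").startswith("Server=db-b")
```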
Why do you have your password in the code?
You should remove the password from this thread.
To replicate the image-list scroll animation like on edifis.ca using GSAP ScrollTrigger, pin the container, use scrub: true, and animate each image block's opacity, scale, or position inside a gsap.timeline() synced with scroll. Use start, end, and pin to control the scroll behavior smoothly.
I am struggling with local Sentinel development as well.
I am on macOS, with Sentinel v0.40.0 and Terraform v1.10.5. My plan file is called `plan.json`.
When using this example, I always fail:
# create file policy.sentinel
vim policy.sentinel
...
sentinel {
features = {
apply-all = true
terraform = true
}
}
import "plugin" "tfplan/v2" {
config = {
"plan_path": "./plan.json"
}
}
...
:wq!
# Execute file
$ sentinel apply policy.sentinel
Error parsing policy: policy.sentinel:1:10: expected ';', found '{' (and 2 more errors)
Any suggestions?
A little update using Chart.js v4.x. Taking inspiration from another thread, How do I change the colour of a Chart.js line when drawn above/below an arbitary value?, you can actually have different sections using different thresholds.
Here is my code to generate a dynamic plugin based on the values. I have min/max thresholds and consider the values inside the range valid and those outside invalid.
You can just call the function with your parameters and add the plugin to the chart.plugins array when you create the chart.
cjs.get_color_line_plugin = function(t_min, t_max, valid_color, invalid_color) {
  return {
    id: 'color_line',
    afterLayout: chart => {
      const ctx = chart.ctx;
      ctx.save();
      const yScale = chart.scales["y"];
      const y_min = yScale.getPixelForValue(t_min);
      const y_max = yScale.getPixelForValue(t_max);
      const gradientFill = ctx.createLinearGradient(0, 0, 0, chart.height);
      gradientFill.addColorStop(0, invalid_color);
      gradientFill.addColorStop(y_max / chart.height, invalid_color);
      gradientFill.addColorStop(y_max / chart.height, valid_color);
      gradientFill.addColorStop(y_min / chart.height, valid_color);
      gradientFill.addColorStop(y_min / chart.height, invalid_color);
      gradientFill.addColorStop(1, invalid_color);
      const datasets = chart.data.datasets;
      datasets.forEach(dataset => {
        if (dataset.type == 'line') dataset.borderColor = gradientFill;
      });
      ctx.restore();
    },
  }
}
Result: (not too clear, but the line is darker above the threshold)
If you're okay with ignoring SSL certificate validation (e.g., for testing), disable SSL verification:
curl -k https://example.com
or
curl --insecure https://example.com
In my case, there were 2 packages, Realm and RealmSwift, when adding the Realm package to the project. After removing Realm and keeping just RealmSwift in the Build Phases settings, my project built successfully.
If you want to merge the changes made to only some of the files changed in a particular commit:
First, find the commit hash of the commit.
Then provide the file paths that you need to cherry-pick, separated by spaces, as below:
git checkout <commit-hash> -- <path/to/file1> <path/to/file2>
BigQuery’s storage remains immutable at the data block level, but the key architectural change enabling fine-grained DML is the introduction of a transactional layer with delta management on top of these immutable blocks.
Instead of rewriting entire blocks for updates/deletes, BigQuery writes delta records that capture only the changes at a granular level. These deltas are tracked via metadata and logically merged with base data during query execution, providing an up-to-date view without modifying underlying immutable storage.
This design balances the benefits of immutable storage (performance, scalability) with the ability to perform near-transactional, fine-grained data modifications.
You may check this documentation.
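Conceptually, the delta-merge can be sketched like this (an illustrative toy model only, not BigQuery's actual implementation):

```python
# Illustrative sketch of delta-based DML over immutable base storage:
# the base block is never modified; deltas are merged at read time.
base = {1: "alice", 2: "bob", 3: "carol"}               # immutable base rows (id -> value)
deltas = [("update", 2, "bobby"), ("delete", 3, None)]  # ordered delta log

def read_view(base, deltas):
    """Logically merge the delta log with the base data into the current view."""
    view = dict(base)  # the base stays untouched
    for op, key, value in deltas:
        if op == "update":
            view[key] = value
        elif op == "delete":
            view.pop(key, None)
    return view

print(read_view(base, deltas))  # {1: 'alice', 2: 'bobby'}
print(base)                     # base is unchanged
```

The query layer sees the merged view, while storage keeps only the immutable base plus small delta records.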
I just want to append to Remy's answer (as I don't have enough reputation to add a comment):
Passing a pointer to the result of TEncoding.Default.GetBytes into C is highly dangerous and buggy: the resulting bytes are not zero-terminated.
A possible fix is zero-terminating the buffer yourself, such as:
SetLength(buffer, Length(buffer) + 1);
buffer[High(buffer)] := 0;
I also missed that part, that's really helpful, thanks @ermiya-eskandary.
Based on a comment from @Jon Spring, adding spaces with paste0 fixes the issue.
This is related to a known bug: https://github.com/thomasp85/gganimate/issues/512
library(dplyr)
library(ggplot2)
library(gganimate)
library(scales)
library(lubridate)
Minimum_Viable <- tibble(
AreaCode = c("A", "A", "B", "B"),
Date = c(ymd("2022-11-24"), ymd("2025-05-08"), ymd("2022-11-24"), ymd("2025-05-08")),
Value = c(56800, 54000, 58000, 62000)
) %>%
mutate(Label = label_comma()(Value))
# contains the issue
Animation_Test <- ggplot(Minimum_Viable,
aes(x = Date, y = Value, label = Label)) +
geom_line(color = "red") +
geom_point(color = "red") +
geom_label() +
labs(title = "{closest_state} Value") +
transition_states(AreaCode,
transition_length = 1,
state_length = 2) +
theme_minimal()
# use paste0 on the label to fix it
Minimum_Viable <- Minimum_Viable %>%
mutate(Label_Workaround = paste0(" ", Label))
# now snaps to the nearest value
Animation_Workaround <- ggplot(Minimum_Viable,
aes(x = Date, y = Value, label = Label_Workaround)) +
geom_line(color = "red") +
geom_point(color = "red") +
geom_label() +
labs(title = "{closest_state} Value") +
transition_states(AreaCode,
transition_length = 1,
state_length = 2) +
theme_minimal()
I'm also facing the same issue, so can you please help me with the solution?
I have explained it in detail in my post; here is the link:
Eagerly waiting for your response!
<div class="console-body"> <div class="message-area" id="message Área"> <div class="controls"> <button class="control-btn" onclick="clearMessages()">🗑️ Limpiar</button> <button class="control-btn" onclick="toggleAutoReply()" id="autoReplyBtn">🤖 Auto-respuesta</button> <button class="control-btn" onclick="exportMessages()">📥 Exportar</button> <button class="control-btn" onkeypress="handleEnterKey(event)💬⚙️ Sistema</button> </div> <div class="message system-message"> <div class="message-header"> <span>🔧 Sistema</span> <span class="message-time" id="systemTime"></span> </div> <div class="message-content"> Consola de mensajes iniciada. ¡Bienvenido! </div> </div> <div class="typing-indicator" id="typingIndicator"> <div class="message-content"> Escribiendo <div class="typing-dots"> <div class="dot"></div> <div class="dot"></div> <div class="dot"></div> </div> </div> </div> </div> <div class="input-area"> <div class="input-container"> <input type="text" class="message-input" id="messageInput" placeholder="Escribe tu mensaje aquí..." " > <button class="send-button" onclick="sendMessage()"> ➤ </button> </div> </div> </div> </div> <script>/free-code-camp/people/onkeypress="handleEnterKey(event)
I found the problem. A very good friend of mine had the idea to use the ANR-WatchDog from SalomonBrys to get the logs I was missing to find the problem. Check it out: https://github.com/SalomonBrys/ANR-WatchDog.
The WatchDog revealed that an infinite loop existed in an old part of the app. That loop created pressure on the main thread. After removing the loop everything works as expected and I can sleep again.
Thanks to everyone for suggesting and putting your thoughts into this.
Although old, this is the closest to what I'm looking for.
I have unstructured data (for example char 512) which I need to address as a structure. I use RTTS to create the structure and a new data reference with that structure.
The part I struggle with is finding a way to move the unstructured data to the dref (or address the original data using a typed field symbol). I want to avoid using offsets and lengths to make a conversion.
Any revelations?
I am facing the same issue. Can you please confirm whether you have any answers? Here is the error:
Restart login cookie not found. It may have expired; it may have been deleted or cookies are disabled in your browser. If cookies are disabled then enable them. Click Back to Application to login again.
The problem was solved by:
Removing the mobile/android folder.
Installing dependencies with Yarn in the root monorepo folder.
Adding the problematic dependencies and doctor configs to the root package.json file.
Configuring metro.config.js for the mobile app at the monorepo project's root level.
The metro.config.js:
const { getDefaultConfig } = require('expo/metro-config');
const path = require('path');
// Get the project root
const projectRoot = __dirname;
// Get the monorepo root
const monorepoRoot = path.resolve(projectRoot, '../..');
const config = getDefaultConfig(projectRoot);
// Configure for monorepo
config.watchFolders = [monorepoRoot];
config.resolver.nodeModulesPaths = [
path.resolve(projectRoot, 'node_modules'),
path.resolve(monorepoRoot, 'node_modules'),
];
module.exports = config;
I've met the same issue with Xcode version 26.0 beta 3 (17A5276g). It seems that Xcode 16.4 builds correctly.
Using the above example, I have a requirement for an alternative flow condition to run B and then C.
So three conditions exist:
A -> decider -> B (SOMECONDITION)
A -> decider -> C (OTHERCONDITION)
A -> decider -> B -> C (BOTHCONDITION)
How can the above be changed to add this additional flow?
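One way to express the third path, sketched with Spring Batch's FlowBuilder (hedged: `stepA`/`stepB`/`stepC`, `decider`, and the status strings are assumptions carried over from the question), is to wrap B-then-C in a sub-flow and route the decider to it:

```java
// Sketch only: wrap B -> C in its own flow so the decider can target the pair.
Flow bThenC = new FlowBuilder<Flow>("bThenC")
        .start(stepB)
        .next(stepC)
        .build();

Flow mainFlow = new FlowBuilder<Flow>("mainFlow")
        .start(stepA)
        .next(decider)
        .on("SOMECONDITION").to(stepB)
        .from(decider).on("OTHERCONDITION").to(stepC)
        .from(decider).on("BOTHCONDITION").to(bThenC)
        .end();
```

The sub-flow avoids giving stepB two conflicting successors in the one flow definition.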
Still not working with the above; the error keeps coming. I closed and restarted Visual Studio. By the way, I have Microsoft Visual Studio Professional 2022 (64-bit), Version 17.11.4.
After trying many solutions, installing the package below fixed the problem on Ubuntu 24.04.
sudo apt install ubuntu-restricted-extras
See "dataFilter" which is a function which modifies the returned data.
While decrypting, I realized I was creating a serialized license, which was unnecessary. The serialized license is already generated during encryption and should be reused during decryption.
Please try adding "--enable-hive-sync"
final activityID = await _liveActivitiesPlugin.createActivity(activityModel,
removeWhenAppIsKilled: true);
This works only on Android; on iOS it didn't work.
The endpoint `https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}/objects/${objectName}` you used has been deprecated for a while, please use direct S3 to upload:
GET oss/v2/buckets/:bucketKey/objects/:objectKey/signeds3upload
Upload the file to S3 using the pre-signed S3 URL obtained above.
POST oss/v2/buckets/:bucketKey/objects/:objectKey/signeds3upload
ref: https://aps.autodesk.com/blog/object-storage-service-oss-api-deprecating-v1-endpoints
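The three steps can be sketched with curl (hedged: the token, bucket key, object key, and file name are placeholders; see the blog post above for the exact payloads):

```shell
# 1. Ask OSS for a pre-signed S3 upload URL (the response also contains an uploadKey).
curl -H "Authorization: Bearer $TOKEN" \
  "https://developer.api.autodesk.com/oss/v2/buckets/$BUCKET_KEY/objects/$OBJECT_KEY/signeds3upload"

# 2. PUT the file bytes straight to the returned S3 URL (no Authorization header).
curl -X PUT --data-binary @model.rvt "$SIGNED_S3_URL"

# 3. Tell OSS the upload is complete, passing back the uploadKey.
curl -X POST -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"uploadKey\": \"$UPLOAD_KEY\"}" \
  "https://developer.api.autodesk.com/oss/v2/buckets/$BUCKET_KEY/objects/$OBJECT_KEY/signeds3upload"
```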
Your Scontrino class contains a LocalDateTime and a Map<Articolo, Integer>. Jackson, the JSON serializer used by Spring Boot, cannot deserialize these properly out of the box without help, especially a Map<Articolo, Integer> that uses an entity as its key.
Use a Long or String as the key instead, and refactor your quantities into DTOs, for example:
class ArticoloQuantita {
    private long articoloId;
    private int quantita;
}
Add the Jackson JSR-310 datatype dependency:
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
</dependency>
Register the module with a customizer:
@Bean
public Jackson2ObjectMapperBuilderCustomizer jsonCustomizer() {
    return builder -> builder.modules(new JavaTimeModule());
}
Then accept a request DTO in your controller:
class ScontrinoRequestDTO {
    private LocalDateTime data;
    private Map<Long, Integer> quantita;
}

public Scontrino crea(@RequestBody ScontrinoRequestDTO dto) {
    return scontrinoService.crea(dto);
}
This is an old thread, but for others looking for an explanation, I'm in this situation now. After looking at the branch I want to switch to at GitHub, I can see that a single line in a file I want to switch to has different content than that line on the current branch (the one I want to switch away from). Git reports "nothing to commit" because that file is ignored.
For the OP, considering the long list of files you had, and the fact that forcing things did no harm, my guess is that you modified the files in the current branch in some trivial way, like changing the file encoding or the EOL character.
There are some suggestions about handling this situation here: Git is deleting an ignored file when i switch branches
Unfortunately, my situation is more complex. I have three branches: master, dev, and test. The file is ignored in both dev and test, so I can switch between them at will. I just can't ever switch to master. I have remotes for all three branches and I'm the only developer. I'm sure there's a way to fix this without messing things up, but I'm not sure what would ensure that in the future I can merge one of the other branches into master and still push master to the remote.
scan 'table_name', {COLUMNS => ["columnfamily:column1", "columnfamily:column2"], FILTER => "SingleColumnValueFilter('columnfamily', 'columnname', operator, 'binary:value_you_are_looking_for')"}
'column1' or 'column2' refer to your actual column names.
'columnfamily' is the column family you defined while creating the table.
SingleColumnValueFilter is used to apply a condition on a single column.
operator can be a comparison symbol like =, !=, <, >, etc.
'binary' is a keyword used to ensure the value is compared as binary data.
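For example, with a hypothetical users table, returning only rows where info:age equals 25 might look like:

```
scan 'users', {COLUMNS => ["info:name", "info:age"], FILTER => "SingleColumnValueFilter('info', 'age', =, 'binary:25')"}
```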
I disabled all extensions in VS 2022 and restarted it. Now, it's working without any issues.
Not an answer to your question, but I am unable to comment yet. Just thought I'd chime in and say you can clean this up a bit by putting those examples directly on the ErrorBody type.
type ErrorBody = {
/**
* @example "https://someurl.com"
**/
type: string,
/**
* @example 409
**/
status: 400 | 401 | ...,
/**
* @example "error/409-error-one-hundred-and-fifty"
**/
code: string,
/**
* @example "This is an example of another error"
**/
title: string,
/**
* @example "You should provide error detail for all errors"
**/
detail: string
}
Then your endpoints can become:
@Response<ErrorBody>('409', 'A 409 error')
@Response<ErrorBody>('4XX', 'A 4xx error called fred')
I am also looking for an answer to this problem. I want all my API error responses to conform to the application/problem+json response type that can be found in this spec. I don't want to manually write out every possible @Response decorator, though. I wish you could do something like:
@Response<ErrorBody>( ErrorStatusCodeEnum, 'An error' );
where ErrorBody would now have the form
type ErrorBody = {
/**
* @example "https://someurl.com"
**/
type: string,
/**
* @example 409
**/
status: ErrorStatusCodeEnum,
/**
* @example "error/409-error-one-hundred-and-fifty"
**/
code: string,
/**
* @example "This is an example of another error"
**/
title: string,
/**
* @example "You should provide error detail for all errors"
**/
detail: string
}
and TSOA would map that to all possible error codes in the enum.
I wish I could elaborate more on the matter, but that's not possible: important information has been deleted by someone on other devices I've had, and I'm being blocked from the information I seek. I know it's there, and I have some proof. I'm thinking of contacting an investigator, the FCC, and local and federal authorities.
Yes, this setting is simple and effective.
It's just that the ratio you chose may be too small, so the final learning rate may be almost 0, causing the model to converge too early. For example, start_lr = 1e-2 and ratio = 1e-4 give final_lr = 1e-6.
Perhaps you can widen the range of the ratio a little, for example:
ratio = trial.suggest_loguniform("lr_ratio", 1e-2, 0.5)
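The arithmetic behind those bounds (plain multiplication, nothing Optuna-specific):

```python
# How the sampled ratio maps to the final learning rate: final_lr = start_lr * ratio.
start_lr = 1e-2

for ratio in (1e-4, 1e-2, 0.5):  # old lower bound vs. the suggested range
    final_lr = start_lr * ratio
    print(f"ratio={ratio:g} -> final_lr={final_lr:g}")
```

With the old lower bound the final rate collapses to about 1e-6, while the wider range keeps it between roughly 1e-4 and 5e-3.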
You can make appropriate adjustments according to your experimental situation.
Please refer to it, thank you.
If your user account employs multifactor authentication (MFA), make sure the Show Advanced checkbox isn't checked.
I used this codelab and did what was said and it worked.
https://codelabs.developers.google.com/codelabs/community-visualization/#0
The path used in the manifest should be the gs:// instead of the https:// path.
def docstring(functionname):
    # Write your code here
    help(functionname)

if __name__ == '__main__':
    x = input()
    docstring(x)
I just needed to update my browsers to the latest versions.
Microsoft Edge is up to date: Version 138.0.3351.83 (Official build) (64-bit).
Chrome is up to date: Version 138.0.7204.101 (Official Build) (64-bit).
View tables inside namespace:
list_namespace_tables "namespace_name"
In place of "namespace_name" type your namespace.
Ancient question... but I recently learned that if you are creating a Project-based Time Activity and you set an Earning Type in the request, Acumatica will blank out the project task.
The solution in this case is to set all fields except the project related ones, grab the ID of the created row, and follow up with a new request to set the Project and Task on just that row.
As another option, you may consider IndexedDB.
If this happens in SoapUI:
In the Message Editor, just below the request message window, click the WS-A button. Then select the checkbox Add default wsa:To.
Have you checked whether the file unins000.exe mentioned in the pop-up window exists? Also, have you tried reinstalling VS Code?
We got the same error when we tried to download results generated by our custom GPT. We then enabled Code Interpreter & Data Analysis under Capabilities, and it seems to have solved the issue.
I used ActivityView on Android 11 successfully, but it failed on Android 12. Maybe Google removed the API in Android 12.
/style.css"> </head> <body> <div class="container"> <h1>Nhận Kim Cương Free Fire</h1> <form action="spin.php" method="post"> <input type="text" name="ff_id" placeholder="Nhập ID Free Fire" required><br> <button type="submit">Tiếp tục</button> </form> <a href="wheel.php" class="btn">Vòng quay KC miễn phí</a><br> <a href="gift.php" class="btn">Nhận skin, súng miễn phí</a> </div> </body> </html> """ spin_php = """ <?php if ($_SERVER["
My co-worker already solved it. I just used:
py -m pip install robotframework
You might try adding this to Spring Boot's application.yaml, as shown here: https://www.baeldung.com/mysql-jdbc-timezone-spring-boot
spring.jpa.properties.hibernate.jdbc.time_zone=UTC
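In application.yaml form, that property would look like this (equivalent to the dotted-property syntax above):

```yaml
spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          time_zone: UTC
```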
Clearing the SSMS cache fixed the problem:
Close all SSMS instances and remove all the files in the following folders: %USERPROFILE%\AppData\Local\Microsoft\SQL Server Management Studio (or SSMS in newer versions) and %USERPROFILE%\AppData\Roaming\Microsoft\SQL Server Management Studio (or SSMS).
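If you prefer to script the cleanup, here is a hedged PowerShell sketch (close SSMS first; adjust the folder name, SQL Server Management Studio vs. SSMS, to match your version):

```powershell
# Remove the SSMS cache folders; -ErrorAction SilentlyContinue skips missing ones.
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue `
  "$env:USERPROFILE\AppData\Local\Microsoft\SQL Server Management Studio"
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue `
  "$env:USERPROFILE\AppData\Roaming\Microsoft\SQL Server Management Studio"
```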