You can sort and filter your required workspace changes with the 'Sort' feature available in VS Code.
In my case, I had this StackOverflowError because the @Service annotation was missing on my UserDetailsService implementation:
2025-03-26T13:52:17.882-03:00 ERROR 14543 --- [image-processor-api] [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed: java.lang.StackOverflowError] with root cause
java.lang.StackOverflowError: null
at java.base/java.lang.Exception.<init>(Exception.java:103) ~[na:na]
at java.base/java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:90) ~[na:na]
at java.base/java.lang.reflect.InvocationTargetException.<init>(InvocationTargetException.java:67) ~[na:na]
@Service
public class UserDetailsServiceImpl implements UserDetailsService {
Once I added it, the problem was solved :D
I created an Oracle Cloud Free Tier account. When I signed in for the first time, it required me to enable secure verification but did not show the option to select a method.
The following worked for me:
Turn on Windows Developer Mode via Windows Settings (open Windows Settings and search for Developer settings, or go to Update & Settings, then For developers. Toggle the Developer Mode setting at the top of the For developers page, read the disclaimer, and click Yes to accept the change).
Set up nvm 1.1.12 (not the newest version, because that one does not set up npm correctly): https://github.com/coreybutler/nvm-windows/releases/download/1.1.12/nvm-setup.zip
If you don't have any Node.js installed, then during installation specify a path to nvm such as D:\Programs\nvm (don't use drive C:). The installer will then ask to install Node.js; specify a path such as D:\Programs\nodejs (again, don't use drive C:).
After installation, check whether PowerShell/the terminal recognizes the nvm alias. If not, set up the environment variables (Win+R, enter sysdm.cpl, then the Advanced tab). Open the Path variable, edit it, and add the following:
d:\Programs\nvm
d:\Programs\nodejs\
Install the needed Node.js version: nvm install 12.19.1
Activate the installed Node.js version: nvm use 12.19.1
With <ReferenceLine /> and its segment property, it's possible:
<ReferenceLine
x={14}
/>
<ReferenceLine
segment={[
{
x: 14,
y: 1200,
},
{
x: 14,
y: 1050,
},
]}
/>
Using
brew services start postgresql
will start the postgres service AND launch it on boot. You should instead use
brew services stop postgresql
brew services run postgresql
to simply start the service without launching it again on boot.
In my personal experience, this seems to be caused by a schema conflict. YMMV.
If you've hidden all the action buttons and have nothing left to right click, use the View: Reset All Menus
command in the command palette.
Same understanding here. Which platform do you usually communicate through? If Meta won't help, we need to help ourselves :D
I found this answer which gives clear steps:
Cmd+Shift+P, then "Configure Snippets" / "New Global Snippets file..."
Enter a file name for this snippet, like uuid or my_snippets.
Replace the placeholder content with:
{
  "Insert UUID": {
    "prefix": "uuid",
    "body": ["${UUID}"],
    "description": "Create UUID"
  }
}
For a short answer: in this case, I would use the second option.
For a longer answer:
Raising an exception within Python:
With the save method, the ValueError check runs in Python code, so it adds overhead each time save is called, and it won't validate data inserted directly into the database. But if you need checks that are too complex for the database engine, or custom exceptions, it is better to use the first option.
Using CheckConstraint:
With the second option, evaluation is done by the database engine, which is more efficient than Python, and it also validates directly inserted data. In this case it is better to let the database engine validate, so that inserted data is also checked and the rule stays closer to the source. If the check is simple enough for the database engine, use the CheckConstraint method.
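To illustrate the difference, here is a minimal sketch using Python's built-in sqlite3 module (not Django, but the same principle): an engine-level CHECK constraint rejects invalid rows even when they are inserted with raw SQL, which a Python-level save() check would never see. The table and values are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK constraint is enforced by the database engine itself.
conn.execute("CREATE TABLE product (name TEXT, price REAL CHECK (price >= 0))")

conn.execute("INSERT INTO product VALUES ('ok', 10.0)")  # passes the check

try:
    # A raw insert bypasses any Python-level validation,
    # but the engine still rejects the row.
    conn.execute("INSERT INTO product VALUES ('bad', -5.0)")
except sqlite3.IntegrityError as exc:
    print("rejected by the engine:", exc)
```

This is exactly the guarantee a Python-only save() override cannot give you.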
This solved my issue
sudo apt-get install libstdc++-14-dev
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
public function up(): void
{
Schema::create('produtos', function (Blueprint $table) {
$table->id();
$table->string('nome');
$table->decimal('preco', 8, 2);
$table->timestamps();
});
}
public function down(): void
{
Schema::dropIfExists('produtos');
}
};
?>
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Produto extends Model
{
use HasFactory;
protected $fillable = ['nome', 'descricao', 'preco']; // Add all the fields you want to allow for mass assignment
}
<?php
namespace App\Http\Controllers;
use App\Models\Produto;
use Illuminate\Http\Request;
class ProdutoController extends Controller
{
public function index()
{
return response()->json(Produto::all());
}
public function store(Request $request)
{
$request->validate([
'nome' => 'required|string|max:255',
'preco' => 'required|numeric',
]);
$produto = Produto::create($request->all());
return response()->json($produto, 201);
}
public function show($id)
{
$produto = Produto::findOrFail($id);
return response()->json($produto);
}
public function update(Request $request, $id)
{
$produto = Produto::findOrFail($id);
$produto->update($request->all());
return response()->json($produto);
}
public function destroy($id)
{
Produto::destroy($id);
return response()->json(null, 204);
}
}
<?php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use App\Http\Controllers\ProdutoController;
Route::get('produtos', [ProdutoController::class, 'index']);
Route::post('produtos', [ProdutoController::class, 'store']);
Route::get('produtos/{id}', [ProdutoController::class, 'show']);
Route::put('produtos/{id}', [ProdutoController::class, 'update']);
Route::delete('produtos/{id}', [ProdutoController::class, 'destroy']);
Were you able to solve this? I am facing a similar issue with serverless PySpark. The difference is that I am reading a path inside a zip file passed through --archives, using Java code that runs from a JAR provided via --jars.
It's not possible to get any extra information about what's unsaved in the Visio Application without making a custom method for it, by doing a full file comparison.
Perhaps surprisingly, this formatter works for your task:
private static final DateTimeFormatter formatter
= new DateTimeFormatterBuilder()
.appendPattern("uuuu[-MM[-dd['T'HH[:mm[:ss]]]]]")
.parseDefaulting(ChronoField.MONTH_OF_YEAR, 12)
.parseDefaulting(ChronoField.DAY_OF_MONTH, 31)
.parseDefaulting(ChronoField.HOUR_OF_DAY, 23)
.parseDefaulting(ChronoField.MINUTE_OF_HOUR, 59)
.parseDefaulting(ChronoField.SECOND_OF_MINUTE, 59)
.parseDefaulting(ChronoField.NANO_OF_SECOND, 999_999_999)
.toFormatter(Locale.ROOT);
I have specified 31 as the day of month to pick when no day of month is in the string. For February 2025, which has 28 days, this gives 2025-02-28T23:59:59.999999999, so the last day of February. It seems that java.time is smart enough to just pick the last day of the month.
Full demonstration:
String[] inputs = { "2025", "2025-01", "2025-02", "2025-01-15", "2025-01-15T09", "2025-01-15T09:15" };
for (String input : inputs) {
LocalDateTime localDateTime = LocalDateTime.parse(input, formatter);
System.out.format(Locale.ENGLISH, "%16s -> %s%n", input, localDateTime);
}
Output:
2025 -> 2025-12-31T23:59:59.999999999
2025-01 -> 2025-01-31T23:59:59.999999999
2025-02 -> 2025-02-28T23:59:59.999999999
2025-01-15 -> 2025-01-15T23:59:59.999999999
2025-01-15T09 -> 2025-01-15T09:59:59.999999999
2025-01-15T09:15 -> 2025-01-15T09:15:59.999999999
You said in a comment:
Yeah, but for simplification of the question lets truncate this Dates to the ChronoUnit.SECONDS and without zones :)
So just leave out .parseDefaulting(ChronoField.NANO_OF_SECOND, 999_999_999).
2025 -> 2025-12-31T23:59:59
2025-01 -> 2025-01-31T23:59:59
2025-02 -> 2025-02-28T23:59:59
2025-01-15 -> 2025-01-15T23:59:59
2025-01-15T09 -> 2025-01-15T09:59:59
2025-01-15T09:15 -> 2025-01-15T09:15:59
In regedit, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem and set LongPathsEnabled to 1.
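If you prefer the command line, the same registry change can be sketched with the standard reg tool (run from an elevated Command Prompt; this modifies system state, so review it before running):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```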
Did you find a solution to this issue? If so, how did you resolve it?
For future reference: it seems that using Remote Config from a Python server environment is possible now. See "Use Remote Config in server environments - Python".
This is such a clear answer! I created an account just to say thank you!
It seems that QtCreator can be a little less than ... intuitive?
This behaviour is baffling for macOS users. Thankfully there are Settings options to tell VS Code to use the global macOS find clipboard: search for "global clipboard" in Settings and check both
Editor > Find: Global Find Clipboard
Search > Global Find Clipboard
I can't separate by semicolons, as I want output plots in the notebook. It is so typical of computer or software responses: a guy asks a question, and no one gives a direct answer. Instead, the guy gets a lecture on some view of how the world should be.
I have the same bug; horizontal lines are not showing! I've been searching for 2 hours now.
Use all lowercase letters:
pip install simpleitk
I had to add a wildcard to make child routes work:
export const serverRoutes: ServerRoute[] = [
{
path: 'public/*',
renderMode: RenderMode.Server,
},
{
path: '**',
renderMode: RenderMode.Client
}
];
Do you remember that you must use different "apiKey"/"secret" sets for demo-trading and real-trading mode?
I know that's not the case here for this issue, but sometimes the OS itself can hold on to that port for some reason, and because of that there's no PID attached to the port in the results of the netstat or lsof commands.
So basically you need to free the port:
sudo fuser -k 3000/tcp
Late to the party here, but VS 2022 has an Errors and warnings section. I added the codes to the Suppress specific warnings box, and the problem was solved. I tried changing the warning level, but it didn't change anything.
export BUNDLE_GITHUB__COM=<your-token-goes-here>
After that, just run the build.
Yes, you can export the data to a Word doc. Here is the code:
import statsmodels.api as sm
import numpy as np
import pandas as pd
from docx import Document
np.random.seed(123)
data = pd.DataFrame({
'X1': np.random.randn(100),
'X2': np.random.randn(100),
'Y': np.random.randint(0, 2, 100)
})
model = sm.Logit(data['Y'], sm.add_constant(data[['X1', 'X2']])).fit()
summary = model.summary2().tables[1]
doc = Document()
doc.add_heading('Regression Results', level=1)
table = doc.add_table(rows=summary.shape[0] + 1, cols=summary.shape[1])
table.style = 'Table Grid'
for j, col_name in enumerate(summary.columns):
table.cell(0, j).text = col_name
for i in range(summary.shape[0]):
for j in range(summary.shape[1]):
table.cell(i + 1, j).text = str(round(summary.iloc[i, j], 4))
doc.save("Regression_Results.docx")
print("Regression table successfully exported to 'Regression_Results.docx'")
Output:
PortableApps (https://portableapps.com/) works with Windows XP 32-bit and 64-bit, and there are often legacy apps for XP 32-bit. If you search for apps using PortableApps on the computer, you should only find apps that work with that computer.
The solution by Padi Amu worked for me.
ERROR at line 1:
ORA-01702: a view is not appropriate here
You cannot create an index on a view.
I was having the exact same problem and creating my access token with read_api, read_registry and read_repository permissions fixed the problem.
Add this to your app.tsx and it will load the new page then scroll to the top.
useEffect(() => {
window.scrollTo(0, 0);
}, [location.pathname]); // Trigger the effect when the pathname changes
DuckDB now offers the following (see the DuckDB documentation page):
select array_to_string(['1', '2', 'sdsd'], '/')
which is a more compact way to write:
SELECT list_aggr(['1', '2', 'z'], 'string_agg', '-')
NB: it only works if all elements are of the same type.
About FFTW3:
CMake does not provide a built-in find module.
ref: https://cmake.org/cmake/help/latest/manual/cmake-modules.7.html
FFTW provides a CMake-based build.
ref: https://github.com/FFTW/fftw3/blob/master/CMakeLists.txt
Unfortunately, the Homebrew formula uses the autotools build system.
ref: https://github.com/Homebrew/homebrew-core/blob/dda17fba84547fddbcb4206679794de4b0852071/Formula/f/fftw.rb
ref: https://formulae.brew.sh/formula/fftw#default
So you have to rely on providing your own find module...
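As a starting point, a minimal FindFFTW3.cmake might look like the sketch below. The variable names, target name, and install locations are assumptions (typical Homebrew prefixes); adjust for your setup.

```cmake
# Minimal sketch of a find module for the double-precision FFTW3 library.
find_path(FFTW3_INCLUDE_DIR fftw3.h
          HINTS /opt/homebrew/include /usr/local/include)  # typical Homebrew prefixes
find_library(FFTW3_LIBRARY NAMES fftw3
             HINTS /opt/homebrew/lib /usr/local/lib)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(FFTW3
    REQUIRED_VARS FFTW3_LIBRARY FFTW3_INCLUDE_DIR)

if(FFTW3_FOUND AND NOT TARGET FFTW3::fftw3)
  add_library(FFTW3::fftw3 UNKNOWN IMPORTED)
  set_target_properties(FFTW3::fftw3 PROPERTIES
      IMPORTED_LOCATION "${FFTW3_LIBRARY}"
      INTERFACE_INCLUDE_DIRECTORIES "${FFTW3_INCLUDE_DIR}")
endif()
```

Drop it in a cmake/ directory, add that directory to CMAKE_MODULE_PATH, and then find_package(FFTW3) should work.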
After investigating the traceback module, I found my solution: traceback.extract_stack(). I analyzed the returned list across many levels (around 10) and found the file and line of the import.
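As a sketch of the idea (not the original code): traceback.extract_stack() returns a list of FrameSummary objects, each carrying the filename and lineno of one stack level, so walking a few levels up finds where the call (or import) happened:

```python
import traceback

def find_caller():
    # extract_stack() lists the current call stack, oldest frame first.
    stack = traceback.extract_stack()
    # stack[-1] is this frame; stack[-2] is whoever called us.
    # In the import case you may need to go several levels further up.
    caller = stack[-2]
    return caller.filename, caller.lineno

filename, lineno = find_caller()
print(f"called from {filename}:{lineno}")
```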
It looks like the /# of the URL in the port and path check causes the problem. You can add # to your check to fix it.
If you are trying to deploy from Oracle Cloud Shell, please make sure you select the GENERIC_ARM shape, because the architecture of the Cloud Shell machine is ARM.
Would @Async still allow parallel calls if you are not using any Future interface and instead return a String object?
If a package reference is present under the project reference, it should not also be present as a package dependency in the test project, and the package version should be the same as the project's package version.
I spent a lot of time and discovered this solution by trying different options.
Thanks.
For Ubuntu, and maybe other Linux users:
I tried many solutions from this post, and they didn't work in the VS Code terminal, either because I am using Linux or because the command simply didn't output anything. So I took the KISS approach and did it from a plain terminal instead of the VS Code terminal. It requested a username (my normal GitHub one) and a password (the classic token I had already generated). With that, the push worked fine, and afterwards it also worked in VS Code. Most probably the IDE was having issues authenticating me on GitHub, but once the terminal did it, everything worked and keeps working.
Not the most clever approach, but it solved the authentication issue after 10 minutes of trying several answers from this post.
Had the same problem: the Quartz scheduler dependency pulled in an incompatible JAR. When the IDE set up the classpath, these got mixed up and the instantiation failed due to the missing method.
Either upgrade/exclude the old dependency, or let a build system like Gradle or Maven manage your classpath so that runtime and compile-time dependencies are kept separate.
Thank you very much - that is exactly what I was looking for.
I have a Blazor Hybrid Maui app that uses websocketclient. When user navigates away from the page, I need to gracefully close the connection. I use OnLocationChanged(). That should work for you as well.
private void OnLocationChanged(object? sender, LocationChangedEventArgs e)
{
if (e.IsNavigationIntercepted)
{
// Handle the page closing event here
ws.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, "Page Closing", CancellationToken.None);
}
}
Any update on this? Having exactly the same issue...
1. Maybe you should check whether the Redirect URLs in Supabase are set correctly.
2. Check the web domain name setting of the Service ID in Apple. The Supabase documentation says that you need to add <project-id>.supabase.co
via: Login with Apple > Using the OAuth flow for web > Configuration
Configure Website URLs for the newly created Services ID. The web domain you should use is the domain your Supabase project is hosted on. This is usually .supabase.co, while the redirect URL is https://.supabase.co/auth/v1/callback.
Maintainer here. It is really curious; it seems to be a problem with the C++ core function of the division operator. I promise to check it and get back to you. Thanks!
Worked for me on node-alpine:16 by adding and running:
apk add gcompat
I know this is not answering the specific question as such, but if you want to automatically populate a spreadsheet with the ISO week number, set the value in the cell to the Google Sheets function =WEEKNUM(TODAY(),21) from your Apps Script. A workaround for Google Sheets.
cell.setValue("=WEEKNUM(Today(),21)");
Use the --async flag in your web app deploy command.
This will cause the command to poll for the status as opposed to waiting. The Azure LB will timeout after 230 seconds (link) so if your job is expected to take longer than that, use the --async flag.
Why not use ODBC instead of JDBC?
Finally, the solution was to enclose the path in the TOSTMF parameter in doubled apostrophes: TOSTMF(''/REPORTS/xxxxxx.PDF'')
CALL QSYS2.QCMDEXC('CPYSPLF FILE(AUMENTO1) TOFILE(*TOSTMF) JOB(557767/RAB/AUMENTO1) SPLNBR(1) TOSTMF(''/REPORTES/xxxxxx.PDF'') STMFOPT(*REPLACE) WSCST(*PDF)')
Just checking whether you got any solution for this. Please share if you found one. Thanks.
For macOS it's Cmd + Shift + V for preview in a new tab and Cmd + K (lift both keys) and then V for side-by-side preview.
See VSCode Markdown Guide for more info.
The onto:measurement in GraphDB reports the values in milliseconds. In this system, the total time for an event is the sum of all individual timing intervals recorded for that event. For cases with nested measurements, a net execution time is computed by subtracting the time spent in nested events from the total time.
For further technical details on onto:measurement, please refer to official GraphDB documentation, which is regularly enriched with additional information:
https://graphdb.ontotext.com/documentation
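As a worked example with hypothetical numbers (the values are made up; only the arithmetic reflects the description above):

```python
# Total time recorded for an event, and the times of its nested events, all in ms.
total_ms = 120          # sum of all timing intervals for the outer event
nested_ms = [30, 25]    # time spent in events nested inside it

# Net execution time: subtract the nested time from the total.
net_ms = total_ms - sum(nested_ms)
print(net_ms)  # 65 ms spent in the outer event itself
```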
Guys, don't do it, because you will lose your data.
The solution I found working is to remove the OpenGIS connections, add conda, and place conda on top for the Python bindings. Currently conda works, but to connect with GeoServer I need to add the PYTHONPATH variable manually every time and delete it when I want to use the normal Python installed on Windows for daily tasks.
I killed the task running at that port (in my case my react app at localhost:3000) using: npx kill-port 3000
and restarted my app as usual.
Since this question was posted, I have repeatedly occupied myself with it, because I am also very interested in it. I got support here: https://discourse.gnome.org/t/gtk4-screenshot-with-gtksnapshot/27981/3?u=holger The problem is to get your own GtkSnapshot, which can then be converted into a GdkTexture and saved accordingly. It is important that the GtkWidget is already drawn.
In this example, I was guided by a contribution from April 2020: https://blog.gtk.org/
#include<gtk/gtk.h>
/* This is the important part */
void demo_snapshot(GtkWidget *widget, GtkSnapshot *snapshot);
void custom_snapshot(GtkWidget *widget) {
GtkSnapshot *snapshot = gtk_snapshot_new();
demo_snapshot(widget, snapshot);
int w = gtk_widget_get_width(widget);
int h = gtk_widget_get_height(widget);
GskRenderNode *node = gtk_snapshot_free_to_node (snapshot);
GskRenderer *renderer = gtk_native_get_renderer (gtk_widget_get_native (widget));
GdkTexture *texture = gsk_renderer_render_texture (renderer,
node,
&GRAPHENE_RECT_INIT (0, 0, w,h ));
gdk_texture_save_to_png (texture, "screenshot.png");
};
/* From here the GtkWidget to be printed begins*/
#define MY_TYPE_WIDGET (my_widget_get_type())
G_DECLARE_FINAL_TYPE (MyWidget, my_widget, MY, WIDGET, GtkWidget)
GtkWidget *my_widget_new();
/*********************************/
struct _MyWidget
{
GtkWidget parent_instance;
};
struct _MyWidgetClass
{
GtkWidgetClass parent_class;
};
G_DEFINE_TYPE(MyWidget, my_widget, GTK_TYPE_WIDGET)
void
demo_snapshot (GtkWidget *widget, GtkSnapshot *snapshot)
{
GdkRGBA red, green, yellow, blue;
float w, h;
gdk_rgba_parse (&red, "red");
gdk_rgba_parse (&green, "green");
gdk_rgba_parse (&yellow, "yellow");
gdk_rgba_parse (&blue, "blue");
w = gtk_widget_get_width (widget) / 2.0;
h = gtk_widget_get_height (widget) / 2.0;
gtk_snapshot_append_color (snapshot, &red,
&GRAPHENE_RECT_INIT(0, 0, w, h));
gtk_snapshot_append_color (snapshot, &green,
&GRAPHENE_RECT_INIT(w, 0, w, h));
gtk_snapshot_append_color (snapshot, &yellow,
&GRAPHENE_RECT_INIT(0, h, w, h));
gtk_snapshot_append_color (snapshot, &blue,
&GRAPHENE_RECT_INIT(w, h, w, h));
}
static void click_cb (GtkGestureClick *gesture,
int n_press,
double x,
double y)
{
GtkEventController *controller = GTK_EVENT_CONTROLLER (gesture);
GtkWidget *widget = gtk_event_controller_get_widget (controller);
custom_snapshot(widget);
if (x < gtk_widget_get_width (widget) / 2.0 &&
y < gtk_widget_get_height (widget) / 2.0)
g_print ("Red!\n");
else if (x > gtk_widget_get_width (widget) / 2.0 &&
y > gtk_widget_get_height (widget) / 2.0)
g_print ("Blue!\n");
else if (x > gtk_widget_get_width (widget) / 2.0 &&
y < gtk_widget_get_height (widget) / 2.0)
g_print ("Green!\n");
else if (x < gtk_widget_get_width (widget) / 2.0 &&
y > gtk_widget_get_height (widget) / 2.0)
g_print ("Yellow!\n");
}
void
demo_measure (GtkWidget *widget,
GtkOrientation orientation,
int for_size,
int *minimum_size,
int *natural_size,
int *minimum_baseline,
int *natural_baseline)
{
*minimum_size = 100;
*natural_size = 200;
};
static void my_widget_dispose(GObject *gobject)
{
MyWidget *self = MY_WIDGET(gobject);
G_OBJECT_CLASS (my_widget_parent_class)->dispose (gobject);
};
static void my_widget_class_init (MyWidgetClass *class)
{
G_OBJECT_CLASS(class)->dispose = my_widget_dispose;
GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (class);
// no LayoutManager is needed here
widget_class->snapshot = demo_snapshot;
widget_class->measure = demo_measure;
};
static void my_widget_init (MyWidget *self)
{
GtkGesture *controller = gtk_gesture_click_new ();
g_signal_connect_object (controller, "pressed",
G_CALLBACK (click_cb), NULL,G_CONNECT_DEFAULT);
gtk_widget_add_controller (GTK_WIDGET(self), GTK_EVENT_CONTROLLER(controller));
}
GtkWidget *my_widget_new()
{
MyWidget *self;
self = g_object_new(MY_TYPE_WIDGET,NULL);
return GTK_WIDGET (self);
};
/************************************************************/
static void activate (GtkApplication *app, gpointer user_data)
{
GtkWidget *window;
window = gtk_application_window_new(app);
GtkWidget *widget = my_widget_new();
gtk_window_set_child(GTK_WINDOW(window),GTK_WIDGET(widget));
gtk_window_present(GTK_WINDOW (window));
};
int main (int argc, char **argv)
{
GtkApplication *app;
int status;
app = gtk_application_new("org.gtk.mywidget",
G_APPLICATION_DEFAULT_FLAGS);
g_signal_connect(app, "activate", G_CALLBACK(activate),NULL);
status = g_application_run (G_APPLICATION(app), argc, argv);
g_object_unref(app);
return status;
}
Have fun trying.
The cc and bcc might still be null or an empty string ("") when being passed, even though you assigned them as an array. Modify the constructor assignment to always ensure an array.
$this->cc = is_array($cc) ? $cc : [];
$this->bcc = is_array($bcc) ? $bcc : [];
As mentioned in the comments it's a bit rude to ask others to do your job, when you haven't even tried yourself.
The following pattern should work:
Pattern:
_[a-z]+\.[0-9]+
Replace with:
_inf$0
Explanation:
"_" matches the underscore literal so we find the start of the pattern we're looking for
"[a-z]+" matches any lowercase letter one or more times (use "[a-z]{3}" if you want to match exactly 3 letters)
"\." matches the dot literal (escaped by a backslash because the dot is a regex expression itself)
"[0-9]+" matches any digit any number of times (so it can occur 3 or 4 times and still gets matched)
_inf$0 replaces the matched pattern with "_inf" plus the matched pattern itself
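For example, the pattern and replacement above behave like this in Python's re module (the sample strings are made up):

```python
import re

pattern = r"_[a-z]+\.[0-9]+"

# \g<0> is Python's spelling of $0, the whole match.
result = re.sub(pattern, r"_inf\g<0>", "flow_abc.123 and flow_xyz.4567")
print(result)  # flow_inf_abc.123 and flow_inf_xyz.4567
```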
You can copy files from external stages to internal stages in snowflake now: (without Python Stored Procedures)
COPY FILES INTO @[<namespace>.]<stage_name>[/<path>/]
FROM @[<namespace>.]<stage_name>[/<path>/]
[ FILES = ( '<file_name>' [ , '<file_name>' ] [ , ... ] ) ]
[ PATTERN = '<regex_pattern>' ]
[ DETAILED_OUTPUT = { TRUE | FALSE } ]
I came across this situation in Visual Studio 2019. The reason for this behavior was the "Optimize code" flag on the Build tab. Some information about this can be found here:
https://learn.microsoft.com/en-us/visualstudio/debugger/project-settings-for-csharp-debug-configurations
Unless a bug appears only in optimized code, leave this setting deselected for Debug builds. Optimized code is harder to debug, because instructions do not correspond directly to statements in the source code.
I am also looking for the same information.
I am trying to convert the tokenizer.json of the HF model https://huggingface.co/nickypro/tinyllama-15M/tree/main into tokenizer.model in order to run Karpathy's llama2.c - https://github.com/karpathy/llama2.c/blob/master/doc/train_llama_tokenizer.md.
I tried the following steps:
1. Extract vocabulary from tokenizer.json
2. Train the sentencepiece tokenizer using spm_train with the extracted vocabulary (vocab_size = 32000). This generates tokenizer.model
3. Use tokenizer.py to convert the tokenizer.model to tokenizer.bin.
Even though the above steps were successful, the inference resulted in gibberish. I assume this has something to do with the tokenizer.model that was generated. If anyone could assist with this, it would be really helpful.
Yes, I connected. Now I want to connect to the MySQL database on the live server. I have the IP, but this error comes up:
org.hibernate.exception.JDBCConnectionException: unable to obtain isolated JDBC connection [Communications link failure
Please give a solution.
My application.properties:
# Application Name
spring.application.name=product
# Remote Database Configuration
spring.datasource.url=jdbc:mysql://My_IP:3306/test
spring.datasource.username=root
spring.datasource.password=my_password
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
# Hibernate & JPA Settings
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.database-platform=org.hibernate.dialect.MySQL8Dialect
Thanks @JérômeRichard and everyone. I misprinted the for-loop conditions. They should be x*x <= limit and y*y <= limit instead of x <= limit and y <= limit. Everything else (dynamic array, array data type, I/O stream, etc.) plays zero role in my case. Now all primes up to 10 million are found within ~2-3 seconds. I made this mistake because I thought I had transferred the code from PHP cleanly, and my eyes just 'skipped' this spot every time I reviewed that part of the C++ code. I am not lazy, but my brain is =)
P.S. I made all the necessary corrections to the C++ code in my question.
Use a debug statement to print your API token, and then ask again here.
On mobile devices you must use "ontouchstart" instead of "onclick" to start playback without error.
This error typically occurs due to permission issues with npm and Git. Here's how you can resolve it:
1. Check the Git installation. Ensure Git is installed and accessible:
git --version
If it is not installed, install it using:
sudo apt update && sudo apt install git
2. Fix permission issues. Run the following to take ownership of the npm directories:
sudo chown -R $(whoami) ~/.npm
sudo chown -R $(whoami) /usr/lib/node_modules
3. Clear the npm cache:
npm cache clean --force
4. Use Node Version Manager (NVM) instead. If you installed Node.js via sudo apt install nodejs, it may cause permission issues. Using NVM can prevent this:
curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.4/install.sh | bash
source ~/.bashrc
nvm install node
nvm use node
Then reinstall MERN CLI without sudo:
npm install -g mern-cli
5. Try installing without the global flag. If the issue persists, run MERN CLI locally:
npx mern-cli
Using catch we can get the message.
set res [catch {exec ./a.out} msg ]
puts "$msg"
Works with an RTX 4090 and i9-14HX; I don't know about others.
Similar to some other answers, but this is much easier:
open a finder window,
type the app name into the search box, and
delete all the files that show up. (Sometimes under Debug folder, Release folder, and/or codesign folder)
Then try reinstalling the app using the pkg file.
Another option is to open that file with tools like https://app.packetsafari.com/ or https://www.qacafe.com/analysis-tools/cloudshark/
I second the previous answer. Building with the Release configuration solves the connection issue. Wish I had found this earlier. At least I can move on for now.
I don't think this answer is correct according to the latest Google Cloud documentation:
If you are migrating from HBase to Bigtable or your application calls the HBase API, use the Bigtable HBase Beam connector (CloudBigtableIO) discussed on this page.
In all other cases, you should use the Bigtable Beam connector (BigtableIO) in conjunction with the Cloud Bigtable client for Java, which works with the Cloud Bigtable APIs. To get started using that connector, see Bigtable Beam connector.
Resolved by setting 'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.InProcessLockProvider'
Had this same issue too. Copy the .gitignore file to another location, delete the existing cache folder, create a new one, move the .gitignore file back into it, then run php artisan optimize:clear. That fixed it for me :)
Well, I should probably have waited before posting, since I found a solution shortly after. For anyone interested, here is a solution that works:
Source: https://github.com/microsoft/vscode-python/issues/6986#issuecomment-581960186
{
"version": "0.2.0",
"configurations": [
{
"name": "Run project in Debug Mode",
"type": "debugpy",
"request": "launch",
"program": "Main.py",
"console": "integratedTerminal",
"env": {
"PYTHONDONTWRITEBYTECODE": "1"
}
}
]
}
There is an open-source tool that can read FHIR bundles and transform FHIR resources into various flat structures (such as CSV, relational database tables, or JSON arrays). You can define mappings and work with different file types easily.
The error message you're seeing, SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data, usually means that the response you're getting isn't valid JSON. Here are a few things you can check:
Check the Response: Use your browser's developer tools to inspect the network request for the reCAPTCHA verification. Look at the response to see if it's returning an error message instead of valid JSON.
API Keys: Make sure that the new reCAPTCHA keys are correctly set up in your code. Double-check that you're using the right keys for the environment (development vs. production).
Server-Side Validation: Ensure that your server-side code is correctly handling the reCAPTCHA response. If there's an issue with how the server processes the response, it might not return valid JSON.
Check for Downtime: While it's rare, you can check if Google's reCAPTCHA service is experiencing any outages. You can look for status updates on their official status page or forums.
CORS Issues: If you're making requests from a different domain, ensure that your server is set up to handle CORS (Cross-Origin Resource Sharing) properly.
If you've checked all these and it still doesn't work, it might be helpful to consult the reCAPTCHA documentation or forums for more specific troubleshooting steps. Good luck!
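The failure mode is easy to reproduce. Here is a small Python sketch (the HTML body is a made-up stand-in for whatever your server actually returned) showing that a non-JSON response fails at line 1 column 1, just like the browser error:

```python
import json

# A server error page returned where JSON was expected.
response_body = "<html><body>500 Internal Server Error</body></html>"

try:
    json.loads(response_body)
except json.JSONDecodeError as exc:
    # The '<' at the very start is the "unexpected character at line 1 column 1".
    print(f"Invalid JSON at line {exc.lineno} column {exc.colno}: {exc.msg}")
```

So the first thing to inspect is the raw response body, not your parsing code.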
I tried to make a browser version of Puppeteer, but I failed.
The article says to clone the Puppeteer repo, install the dependencies, and build with Rollup or webpack.
There are a couple of things I am not sure about:
Should I install the Puppeteer repo in my project or outside of my project?
Should I build my app with Rollup, or add a Rollup build script to the Puppeteer project after I clone it?
I don't know if my approach is right, but I decided to clone Puppeteer into my project, build everything with Rollup, and use my main.ts as the entry point for the Rollup build.
Here is my concern: why would I ever use Vite and Rollup for the same app? Isn't one of them useless at this point?
This is a bug and it has been reported.
This bug is triggered when IntentlessPolicy is used AND there are no e2e tests. If you want to continue using IntentlessPolicy, there is a workaround:
Create an e2e test, here is the link to docs how to do it: Evaluating Your Assistant (E2E Testing) | Rasa Documentation.
I faced the same issue on Windows.
I fixed it by deleting the index.lock file from the .git directory (C:\Users\YourUserName\.git in my case).
It looks like your SQL tool is scanning for substitution variables. You can usually escape the ampersand with a backslash or turn scanning off. In GoldSqall this can be done in the options or with a "set scan off;" command in the script; your tool probably has something similar.
I don't think so.
Source : https://discussions.unity.com/t/can-i-distribute-unitys-dll-with-an-open-source-project/218013.
No, you cannot. The UnityEngine.dll and other modules are part of the Unity engine. They can be distributed with a game built with the Unity engine, but you are not allowed to use or distribute parts of the engine outside a Unity project.
Apart from that, most things you find inside UnityEngine.dll require the native core of the Unity engine, as many parts of UnityEngine.dll are just wrapper classes that refer to externally defined methods. This is true not only for the various component classes but also for parts of the structs (like Vector2/3/4, Matrix4x4, Quaternion, …).
I don't think there's anything inside UnityEngine.dll that would be worth using outside of Unity. If it's something simple, it's probably easier to implement the functionality yourself. If it's more complex, it most likely depends on the engine core anyway. What exactly are you using from those assemblies?
— Quoted from Bunny83
You can also use @PostConstruct:
import jakarta.annotation.PostConstruct // use javax.annotation.PostConstruct on Spring Boot 2.x
import org.slf4j.LoggerFactory
import org.springframework.stereotype.Service

@Service
class ServerConfigurationService(
    private val serverConfigurationRepository: ServerConfigurationRepository
) {
    private val logger = LoggerFactory.getLogger(ServerConfigurationService::class.java)

    fun createServerConfiguration() {
        if (serverConfigurationRepository.count() == 0L) {
            try {
                serverConfigurationRepository.save(ServerConfiguration())
            } catch (e: Exception) {
                logger.error("Error while creating server configuration: ${e.message}", e)
            }
        } else {
            logger.info("Server configuration already present")
        }
    }

    @PostConstruct
    fun init() {
        logger.info("Initializing server configuration...")
        createServerConfiguration()
    }
}
The PIL library is not the best choice for editing EXIF metadata because it recompresses the image when saving. This means that after multiple modifications of a JPEG image, compression artifacts may appear.
A better alternative is the exiv2 library. It is well-documented, production-ready, and modifies only the metadata without recompressing the image, preserving its original quality.
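As a minimal sketch, assuming the pyexiv2 Python binding for Exiv2 (`pip install pyexiv2`); the file path, the Artist tag, and the helper names are illustrative choices, not a prescribed API usage:

```python
def artist_patch(name: str) -> dict:
    # Build the EXIF patch we want to apply; tag names follow Exiv2's
    # "Exif.Group.Tag" convention.
    return {"Exif.Image.Artist": name}

def set_artist(path: str, name: str) -> None:
    # Lazy import so the pure helper above works without the binding installed.
    import pyexiv2  # assumed binding; install with `pip install pyexiv2`
    img = pyexiv2.Image(path)
    try:
        # modify_exif rewrites only the metadata segment; the compressed
        # pixel data is untouched, so no recompression artifacts appear.
        img.modify_exif(artist_patch(name))
    finally:
        img.close()
```

Because only the metadata block is rewritten, you can run this repeatedly on the same JPEG without degrading image quality.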
If you want to customise the icons, you should change the following slots:
import AddIcon from '@mui/icons-material/Add';
import RemoveIcon from '@mui/icons-material/Remove';

// Any React components can be used as the expand/collapse icons.
const CustomExpandIcon = () => <AddIcon />;
const CustomCollapseIcon = () => <RemoveIcon />;

<DataGridPro
  slots={{
    detailPanelExpandIcon: CustomExpandIcon,
    detailPanelCollapseIcon: CustomCollapseIcon,
  }}
/>
To restrict users on an OTT platform and limit simultaneous streams per account (e.g., 3 streams per user), you need a concurrent stream management system integrated into your online video platform. Here’s how you can achieve this:
1. Multi-Device Login Control: Use session management to track active logins and enforce device restrictions per account. If a user exceeds the limit, they must log out from another device.
2. Concurrent Stream Restriction: Deploy a stream concurrency control feature that allows only a set number of active streams per account. Once the limit is reached, additional streams are blocked.
3. Token-Based Authentication: Secure access by issuing a unique streaming token for each session. When a user tries to exceed the allowed limit, the system denies token generation for the extra session.
4. DRM-Based Access Management: Integrate Digital Rights Management (DRM) solutions like Vplayed, Widevine, FairPlay, or PlayReady to enforce playback restrictions based on user profiles.
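The concurrent-stream restriction described above can be sketched in a few lines of Python. This is an in-memory illustration only; in production the session state would live in Redis or a database, and the `StreamLimiter` name and its methods are hypothetical, not from any real streaming SDK:

```python
class StreamLimiter:
    """Track active streaming sessions per user (in-memory sketch)."""

    def __init__(self, max_streams: int = 3):
        self.max_streams = max_streams
        self.active: dict[str, set[str]] = {}  # user_id -> session tokens

    def start_stream(self, user_id: str, session_token: str) -> bool:
        """Register a new stream; return False if the user is at the limit."""
        sessions = self.active.setdefault(user_id, set())
        if len(sessions) >= self.max_streams:
            return False  # deny: concurrent-stream limit reached
        sessions.add(session_token)
        return True

    def stop_stream(self, user_id: str, session_token: str) -> None:
        """Release a stream slot when playback ends or the user logs out."""
        self.active.get(user_id, set()).discard(session_token)
```

The token-based authentication step would then simply refuse to mint a playback token whenever `start_stream` returns False.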
I tried every suggestion above but nothing works for me. I am on Windows 11 and it is super frustrating that I am stuck with this and I needed to run and test Google Auth (react-native-google-signin).
Below is full log:
D:\test-app>npx expo run:android
› Building app...
WARNING: A restricted method in java.lang.System has been called
WARNING: java.lang.System::load has been called by net.rubygrapefruit.platform.internal.NativeLibraryLoader in an unnamed module (file:/C:/Users/Saad/.gradle/wrapper/dists/gradle-8.10.2-all/7iv73wktx1xtkvlq19urqw1wm/gradle-8.10.2/lib/native-platform-0.22-milestone-26.jar)
WARNING: Use --enable-native-access=ALL-UNNAMED to avoid a warning for callers in this module
WARNING: Restricted methods will be blocked in a future release unless native access is enabled
Configuration on demand is an incubating feature.
> Configure project :app
ℹ️ Applying gradle plugin 'expo-dev-launcher-gradle-plugin' ([email protected])
> Configure project :expo
Using expo modules
- expo-asset (11.0.4)
- expo-blur (14.0.3)
- expo-constants (17.0.8)
- expo-dev-client (5.0.15)
- expo-dev-launcher (5.0.31)
- expo-dev-menu (6.0.21)
- expo-file-system (18.0.11)
- expo-font (13.0.4)
- expo-haptics (14.0.1)
- expo-json-utils (0.14.0)
- expo-keep-awake (14.0.3)
- expo-linking (7.0.5)
- expo-manifests (0.15.7)
- expo-modules-core (2.2.3)
- expo-splash-screen (0.29.22)
- expo-system-ui (4.0.8)
- expo-web-browser (14.0.2)
> Configure project :react-native-reanimated
Android gradle plugin: 8.6.0
Gradle: 8.10.2
> Task :react-native-reanimated:configureCMakeDebug[arm64-v8a] FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':react-native-reanimated:configureCMakeDebug[arm64-v8a]'.
> WARNING: A restricted method in java.lang.System has been called
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
For more on this, please refer to https://docs.gradle.org/8.10.2/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.
BUILD FAILED in 36s
593 actionable tasks: 228 executed, 350 from cache, 15 up-to-date
Error: D:\test-app\android\gradlew.bat app:assembleDebug -x lint -x test --configure-on-demand --build-cache -PreactNativeDevServerPort=8081 -PreactNativeArchitectures=x86_64,arm64-v8a exited with non-zero code: 1
Error: D:\test-app\android\gradlew.bat app:assembleDebug -x lint -x test --configure-on-demand --build-cache -PreactNativeDevServerPort=8081 -PreactNativeArchitectures=x86_64,arm64-v8a exited with non-zero code: 1
at ChildProcess.completionListener (D:\test-app\node_modules\@expo\spawn-async\src\spawnAsync.ts:67:13)
at Object.onceWrapper (node:events:629:26)
at ChildProcess.emit (node:events:514:28)
at ChildProcess.cp.emit (D:\test-app\node_modules\cross-spawn\lib\enoent.js:34:29)
at maybeClose (node:internal/child_process:1105:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:305:5)
...
at spawnAsync (D:\test-app\node_modules\@expo\spawn-async\src\spawnAsync.ts:28:21)
at spawnGradleAsync (D:\test-app\node_modules\@expo\cli\src\start\platforms\android\gradle.ts:134:28)
at assembleAsync (D:\test-app\node_modules\@expo\cli\src\start\platforms\android\gradle.ts:83:16)
at runAndroidAsync (D:\test-app\node_modules\@expo\cli\src\run\android\runAndroidAsync.ts:48:24)
Currently it is impossible to customize this redirect url. It is hardcoded here: https://github.com/keycloak/keycloak/blob/7992529e4a169fb200bd583f5776a8023f1df591/services/src/main/java/org/keycloak/services/resources/admin/UserResource.java#L420
I've created a video that explains a way around this; let me know if it helps.
For embedded systems with limited resources, installing a package manager may not be convenient.
Do you have the toolchain that was used to cross-compile Linux for your machine?
If so, I would recommend building Mosquitto with it, so that it integrates into your system, hopefully without compatibility problems.
Was the above-mentioned issue resolved?
But when I try to add caps like this:
radosgw-admin user create --uid=superadmin --display-name="Admin User" --system
radosgw-admin caps add --uid=superadmin --caps="users=*;buckets=*;metadata=*;usage=*;zone=*"
I can't then list all the buckets, with s3cmd ls for example, or with the Python boto3 framework.
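For reference, the boto3 listing the question describes would look roughly like this. The endpoint URL and credentials are placeholders, and `rgw_client`/`bucket_names` are illustrative helper names, not part of boto3 itself:

```python
def bucket_names(list_buckets_response: dict) -> list:
    # Pull the bucket names out of an S3 ListBuckets response dict.
    return [b["Name"] for b in list_buckets_response.get("Buckets", [])]

def rgw_client(endpoint: str, access_key: str, secret_key: str):
    # Lazy import so the helper above stays usable without boto3 installed.
    import boto3
    return boto3.client(
        "s3",
        endpoint_url=endpoint,          # your RGW endpoint URL (placeholder)
        aws_access_key_id=access_key,   # keys printed by `radosgw-admin user create`
        aws_secret_access_key=secret_key,
    )

# Usage against a real cluster (placeholders):
# s3 = rgw_client("http://rgw.example.com:7480", "ACCESS", "SECRET")
# print(bucket_names(s3.list_buckets()))
```

Note that `s3.list_buckets()` only returns buckets *owned* by the calling user; listing every bucket in the cluster goes through the RGW admin API, which is a different endpoint from plain S3 and needs the caps shown above.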