This solved it for me:
Create a new form, go to MyProject, change the application framework to something else, then change it back, then select the new form.
In ASP.NET web applications, I have found the null-forgiving operator to be useful for defining navigation properties, as in the following example.
Suppose we have two related tables, Course and Teacher.
In the definition of the Course class, we could have something like this:
public int TeacherID { get; set; }
public Teacher Teacher { get; set; } = null!;
The assignment in the second line, using the null-forgiving operator, suppresses the compiler's nullability warning: Course.Teacher is declared non-nullable in C# even though it is only populated when the navigation property is loaded, while the column remains non-nullable in the database, and that can be very useful.
Is there a better way to achieve the same effect?
You're joining the tables on the wrong columns; you need to join on GenreId, not Name:
SELECT
Track.Name AS TrackName,
Track.GenreId,
Track.Composer,
Genre.Name AS GenreName
FROM Track
INNER JOIN Genre ON Track.GenreId = Genre.GenreId
WHERE
Track.GenreId IN (1, 3, 4, 5, 23, 9)
ORDER BY Track.Name;
How is LIBRARY_SOURCE defined? In the Doxygen configure file, you can set it via PREDEFINED:
PREDEFINED = LIBRARY_SOURCE
You can also check whether the documentation shows up if preprocessing is disabled (enabled by default):
ENABLE_PREPROCESSING = NO
It appears that the PATH system variable is too long.
I suggest you manually review the system environment variables and correct anything that looks abnormal (e.g., repeated entries).
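Before editing by hand, it can help to see which entries are duplicated. Here is a minimal Python sketch (the sample PATH value below is made up; on a real machine you would pass os.environ["PATH"]):

```python
import os

def duplicate_path_entries(path_value, sep=os.pathsep):
    """Return the entries that appear more than once in a PATH-style string."""
    seen, dupes = set(), []
    for entry in filter(None, path_value.split(sep)):
        if entry in seen and entry not in dupes:
            dupes.append(entry)
        seen.add(entry)
    return dupes

# Made-up example value; use os.environ["PATH"] on your machine
print(duplicate_path_entries("C:/bin;C:/tools;C:/bin", sep=";"))  # prints ['C:/bin']
```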
The sitemap looks accessible and valid, but Google may reject it if the server isn't returning the correct Content-Type (should be application/xml). Also check for redirects, HTTP/HTTPS inconsistencies, or robots.txt blocks. Sometimes Search Console delays processing — try again after 24–48 hours.
I was using Bootstrap 5 and jQuery 1.12 in my project. Upgrading jQuery from 1.12 to 3.7 fixed my issue.
I hope the issue has been fixed by now 😂, but for any developer looking this up: remember to call glGetError() in a loop until it is clear, because it returns one error at a time, while a single function call can generate more than one.
It is hard to deduce what's going on, since OpenGL is highly contextual; more code showing the current state of the GL context is needed.
I strongly recommend using a wrapper function for glGetError, or even moving to glDebugMessageCallback.
static void GLClearErrors() {
    // Drain any errors left over from earlier calls
    while (glGetError() != GL_NO_ERROR);
}

static void GLCheckErrors() {
    // glGetError() returns GL_NO_ERROR (0) once the queue is empty
    while (GLenum error = glGetError()) {
        std::cout << "[OpenGL error] " << error << std::endl;
    }
}
It is the little functions like these that ensure safety in your code; simple, yet darn useful.
For .NET 8 you should add the package 'Microsoft.Extensions.Hosting.WindowsServices' and call UseWindowsService(). It activates only if it detects that the process is running as a Windows service.
IHost host = Host.CreateDefaultBuilder(args)
.UseWindowsService()
....
.Build();
await host.RunAsync();
I found a solution for making proper IN clauses in case somebody needs to search on multiple values in a field of a PostgreSQL enum type, since John Williams's solution works, but only on a varchar field.
return jdbcClient.sql("""
SELECT *
FROM configuration
WHERE status IN (:status)
""")
.param("status", request.configurationStatus().stream().map(Enum::name).collect(Collectors.toList()), OTHER)
.query((rs, rowNum) -> parseConfiguration(rs))
.list();
The key thing is that the third parameter should be used, which defines the SQL type.
In my case I used OTHER (the available type constants can be seen in java.sql.Types).

Since you're the package author, maybe you can tell me whether I can adjust the estimation window. As I understand the package description, all data available before the event date is used for the estimation.
"estimation.period: If “type” is specified, then estimation.period is calculated for each firm-event in “event.list”, starting from the start of the data span till the start of event period (inclusive)."
That would lead to estimation windows of different lengths depending on the event date. Can I change this manually (e.g., an estimation window from t = -200 until t = -10)?
Did you ever find a solution for this? I'm having the same problem, container seems to be running infinitely and I want it to be marked as "success" so the next tasks can move on.
Thanks to siggermannen and Dan Guzman, I arrived at the following query:
use [OmegaCA_Benchmark]
select
a.database_specification_id,
a.audit_action_id, a.audit_action_name,
a.class, a.class_desc,
a.major_id,
object_schema_name =
CASE
WHEN a.class_desc = 'OBJECT_OR_COLUMN' THEN OBJECT_SCHEMA_NAME(a.major_id)
ELSE NULL
END,
object_name =
CASE
WHEN a.class_desc = 'OBJECT_OR_COLUMN' THEN OBJECT_NAME(a.major_id)
WHEN a.class_desc = 'SCHEMA' THEN SCHEMA_NAME(a.major_id)
WHEN a.class_desc = 'DATABASE' THEN 'OmegaCA_Benchmark'
ELSE NULL
END,
a.minor_id,
a.audited_principal_id, c.name as Principal_Name,
a.audited_result,
a.is_group,
b.name as DB_Aud_Spec_Name,
b.create_date, b.modify_date,
b.audit_guid,
b.is_state_enabled
from sys.database_audit_specification_details a
inner join sys.database_audit_specifications b
on a.database_specification_id = b.database_specification_id
inner join sys.database_principals c
on a.audited_principal_id = c.principal_id
best regards
Altin
import React from 'react';
import { motion } from 'framer-motion';
import { Card } from '@/components/ui/card';
import './styles.css';

const HardixFFIntro = () => {
  return (
    <motion.div
      initial={{ opacity: 0 }}
      animate={{ opacity: 1 }}
      transition={{ duration: 3 }}
      className="smoke-bg"
    />
    <motion.img
      src="/mnt/data/file-Gkk3FLHg8Uaa2FJ1CcVHZD"
      alt="Hardix.FF Logo"
      initial={{ scale: 0.8, opacity: 0 }}
      animate={{ scale: 1, opacity: 1 }}
      transition={{ duration: 2, delay: 1 }}
      className="logo-img"
    />
    <motion.h1
      initial={{ y: 100, opacity: 0 }}
      animate={{ y
Looking at it, the only thing that would trigger me is the dataloader...
But if it works with the other models, it should work with this one too.
Can you share your dataloader code?
bro just use border-spacing (note it takes at most two values: horizontal and vertical)
table.that-has-your-th {
    border-spacing: 9px 8px;
}
Old post, but I had the same issue. We had to install Reqnroll to replace SpecFlow, and this stopped working for me at one point. I looked everywhere and even reinstalled Reqnroll based on recommendations, but that still didn't work.
I finally reinstalled the .NET 8.0.14 runtime (https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/runtime-8.0.14-windows-x64-installer) and it started working again.
Hopefully this solution helps someone.
The binlog mode of logproxy should be used with flink-mysql-cdc, which is equivalent to treating observer + logproxy + obproxy as a MySQL instance. In this way, the connection information uses JDBC (that is, connecting to obproxy).
Refer to https://nightlies.apache.org/flink/flink-cdc-docs-release-3.1/docs/connectors/flink-sources/mysql-cdc/
It is recommended to use the latest version, 3.1.1.
I recommend https://min.io/docs/minio/linux/reference/minio-mc.html, which is well maintained these days.
The issue seemed to be with the matching clause I was using in the code, which was omitted here in the example as I thought it was not the issue. I was matching using a string id instead of an ObjectId. I thought this was not the issue because it seems these string ids work when querying through various methods.
I hope the answer given in the link below helps resolve your issue.
Angular 18, VS Code 1.95.2, after ng serve, hitting F5 starts the browser and spins indefinitely
This behavior is actually specified: the HTML standard says a newline immediately following the <pre> start tag is stripped by the parser, so browsers are following the spec when that first line break doesn't render. Line breaks elsewhere inside the element are preserved.
public static function matchesPatternTrc20(string $address) : bool
{
    // TRC-20 addresses are base58check: "T" followed by 33 base58 characters
    // (digits 1-9 and letters excluding O, I and l)
    return boolval(preg_match('/^T[1-9A-HJ-NP-Za-km-z]{33}$/', $address));
}
So, after some more time and issues, I figured out that my phpunit/phpunit version was too old (9.x), so I updated it to work with dama/doctrine-test-bundle (which needs PHPUnit 10+).
But in the end, I removed dama/doctrine-test-bundle and used hautelook/alice-bundle.
I had to add this code in my /test/bootstrap.php to create the db and the schema.
use App\Kernel;
use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\ArrayInput;

$appKernel = new Kernel('test', false);
$appKernel->boot();
$application = new Application($appKernel);
$application->setCatchExceptions(false);
$application->setAutoExit(false);
$application->run(new ArrayInput([
'command' => 'doctrine:database:drop',
'--force' => '1',
]));
$application->run(new ArrayInput([
'command' => 'doctrine:database:create',
]));
$application->run(new ArrayInput([
'command' => 'doctrine:schema:create',
]));
$appKernel->shutdown();
And I added use ReloadDatabaseTrait; at the beginning of my test class.
Microsoft is painfully vague on the details of this but:
Add a role assignment to your key vault in the IAM tab.
Choose Key Vault Certificate User (or whatever role you chose)
For members, choose "User, group, or service principal". In the selection menu search for "Microsoft Azure App Service". This will bring up the built-in service principal that is needed to bind the certificate in Key Vault (you'll notice its application ID is abfa0a7c-a6b6-4736-8310-5855508787cd).
I don't think you even need the user-assigned managed identity once this built-in SPN is set up, but you can test that.
This is used in dotnet new react to launch the Node.js app.
Glance over the aspnetcore repo, folder \src\Middleware\Spa\SpaProxy\.
It runs npm start and adds SpaProxyMiddleware.

from io import BytesIO
import twain
import tkinter as tk
from tkinter import ttk, messagebox, filedialog
import logging
import PIL.ImageTk
import PIL.Image
import datetime
scanned_image = None
current_settings = {
'scan_mode': 'Color',
'resolution': 300,
'document_size': 'A4',
'document_type': 'Normal',
'auto_crop': False,
'brightness': 0,
'contrast': 0,
'destination': 'File',
'file_format': 'JPEG',
'file_path': ''
}
def check_adf_support(src):
"""Check if the scanner supports ADF and return ADF status"""
try:
# Check if ADF is supported
if src.get_capability(twain.CAP_FEEDERENABLED):
print("ADF is supported by this scanner")
# Check if ADF is loaded with documents
if src.get_capability(twain.CAP_FEEDERLOADED):
print("ADF has documents loaded")
return True
else:
print("ADF is empty")
return False
else:
print("ADF is not supported")
return False
except twain.excTWCC_CAPUNSUPPORTED:
print("ADF capability not supported")
return False
def apply_settings_to_scanner(src):
"""Apply the current settings to the scanner source"""
try:
# Set basic scan parameters
if current_settings['scan_mode'] == 'Color':
src.set_capability(twain.ICAP_PIXELTYPE, twain.TWPT_RGB)
elif current_settings['scan_mode'] == 'Grayscale':
src.set_capability(twain.ICAP_PIXELTYPE, twain.TWPT_GRAY)
else: # Black & White
src.set_capability(twain.ICAP_PIXELTYPE, twain.TWPT_BW)
src.set_capability(twain.ICAP_XRESOLUTION, float(current_settings['resolution']))
src.set_capability(twain.ICAP_YRESOLUTION, float(current_settings['resolution']))
# Set document size (simplified)
if current_settings['document_size'] == 'A4':
src.set_capability(twain.ICAP_SUPPORTEDSIZES, twain.TWSS_A4)
# Set brightness and contrast if supported
src.set_capability(twain.ICAP_BRIGHTNESS, float(current_settings['brightness']))
src.set_capability(twain.ICAP_CONTRAST, float(current_settings['contrast']))
# Set auto crop if supported
if current_settings['auto_crop']:
src.set_capability(twain.ICAP_AUTOMATICBORDERDETECTION, True)
except twain.excTWCC_CAPUNSUPPORTED:
print("Some capabilities are not supported by this scanner")
def process_scanned_image(img):
"""Handle the scanned image (save or display)"""
global scanned_image
# Save to file if destination is set to file
if current_settings['destination'] == 'File' and current_settings['file_path']:
file_ext = current_settings['file_format'].lower()
if file_ext == 'jpeg':
file_ext = 'jpg'
# Add timestamp to filename for ADF scans
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S_%f")
img.save(f"{current_settings['file_path']}_{timestamp}.{file_ext}")
# Display in UI (only the last image for ADF)
width, height = img.size
factor = 600.0 / width
scanned_image = PIL.ImageTk.PhotoImage(img.resize(size=(int(width * factor), int(height * factor))))
image_frame.destroy()
ttk.Label(root, image=scanned_image).pack(side="left", fill="both", expand=1)
def scan():
global scanned_image
with twain.SourceManager(root) as sm:
src = sm.open_source()
if src:
try:
# Check ADF support
adf_supported = check_adf_support(src)
# Apply settings before scanning
apply_settings_to_scanner(src)
if adf_supported:
# Enable ADF mode
src.set_capability(twain.CAP_FEEDERENABLED, True)
src.set_capability(twain.CAP_AUTOFEED, True)
print("Scanning using ADF mode...")
else:
print("Scanning in flatbed mode...")
# Scan loop for ADF (will scan once if flatbed)
while True:
src.request_acquire(show_ui=False, modal_ui=False)
(handle, remaining_count) = src.xfer_image_natively()
if handle is None:
break
bmp_bytes = twain.dib_to_bm_file(handle)
img = PIL.Image.open(BytesIO(bmp_bytes), formats=["bmp"])
process_scanned_image(img)
# Break if no more documents in ADF
if remaining_count == 0:
break
except Exception as e:
messagebox.showerror("Scan Error", f"Error during scanning: {e}")
finally:
src.destroy()
else:
messagebox.showwarning("Warning", "No scanner selected")
def test_adf_support():
"""Test if ADF is supported and show result in messagebox"""
with twain.SourceManager(root) as sm:
src = sm.open_source()
if src:
try:
# Check basic ADF support
try:
has_adf = src.get_capability(twain.CAP_FEEDER)
except:
has_adf = False
# Check more detailed ADF capabilities
capabilities = {
'CAP_FEEDER': has_adf,
'CAP_FEEDERENABLED': False,
'CAP_FEEDERLOADED': False,
'CAP_AUTOFEED': False,
'CAP_FEEDERPREP': False
}
for cap in capabilities.keys():
try:
capabilities[cap] = src.get_capability(getattr(twain, cap))
except:
pass
# Build results message
result_msg = "ADF Test Results:\n\n"
result_msg += f"Basic ADF Support: {'Yes' if capabilities['CAP_FEEDER'] else 'No'}\n"
result_msg += f"ADF Enabled: {'Yes' if capabilities['CAP_FEEDERENABLED'] else 'No'}\n"
result_msg += f"Documents Loaded: {'Yes' if capabilities['CAP_FEEDERLOADED'] else 'No'}\n"
result_msg += f"Auto-feed Available: {'Yes' if capabilities['CAP_AUTOFEED'] else 'No'}\n"
result_msg += f"Needs Preparation: {'Yes' if capabilities['CAP_FEEDERPREP'] else 'No'}\n"
messagebox.showinfo("ADF Test", result_msg)
except Exception as e:
messagebox.showerror("Error", f"Error testing ADF: {e}")
finally:
src.destroy()
else:
messagebox.showwarning("Warning", "No scanner selected")
def browse_file():
filename = filedialog.asksaveasfilename(
defaultextension=f".{current_settings['file_format'].lower()}",
filetypes=[(f"{current_settings['file_format']} files", f"*.{current_settings['file_format'].lower()}")]
)
if filename:
current_settings['file_path'] = filename
file_path_var.set(filename)
def update_setting(setting_name, value):
current_settings[setting_name] = value
if setting_name == 'file_format' and current_settings['file_path']:
# Update file extension if file path exists
base_path = current_settings['file_path'].rsplit('.', 1)[0]
current_settings['file_path'] = base_path
file_path_var.set(base_path)
def create_settings_panel(parent):
# Scan Mode
ttk.Label(parent, text="Scan Mode:").grid(row=0, column=0, sticky='w')
scan_mode = ttk.Combobox(parent, values=['Color', 'Grayscale', 'Black & White'], state='readonly')
scan_mode.set(current_settings['scan_mode'])
scan_mode.grid(row=0, column=1, sticky='ew')
scan_mode.bind('<<ComboboxSelected>>', lambda e: update_setting('scan_mode', scan_mode.get()))
# Resolution
ttk.Label(parent, text="Resolution (DPI):").grid(row=1, column=0, sticky='w')
resolution = ttk.Combobox(parent, values=[75, 150, 300, 600, 1200], state='readonly')
resolution.set(current_settings['resolution'])
resolution.grid(row=1, column=1, sticky='ew')
resolution.bind('<<ComboboxSelected>>', lambda e: update_setting('resolution', int(resolution.get())))
# Document Size
ttk.Label(parent, text="Document Size:").grid(row=2, column=0, sticky='w')
doc_size = ttk.Combobox(parent, values=['A4', 'Letter', 'Legal', 'Auto'], state='readonly')
doc_size.set(current_settings['document_size'])
doc_size.grid(row=2, column=1, sticky='ew')
doc_size.bind('<<ComboboxSelected>>', lambda e: update_setting('document_size', doc_size.get()))
# Document Type
ttk.Label(parent, text="Document Type:").grid(row=3, column=0, sticky='w')
doc_type = ttk.Combobox(parent, values=['Normal', 'Text', 'Photo', 'Magazine'], state='readonly')
doc_type.set(current_settings['document_type'])
doc_type.grid(row=3, column=1, sticky='ew')
doc_type.bind('<<ComboboxSelected>>', lambda e: update_setting('document_type', doc_type.get()))
# Auto Crop
auto_crop = tk.BooleanVar(value=current_settings['auto_crop'])
ttk.Checkbutton(parent, text="Auto Crop", variable=auto_crop,
command=lambda: update_setting('auto_crop', auto_crop.get())).grid(row=4, column=0, columnspan=2, sticky='w')
# Brightness
ttk.Label(parent, text="Brightness:").grid(row=5, column=0, sticky='w')
brightness = ttk.Scale(parent, from_=-100, to=100, value=current_settings['brightness'])
brightness.grid(row=5, column=1, sticky='ew')
brightness.bind('<ButtonRelease-1>', lambda e: update_setting('brightness', brightness.get()))
# Contrast
ttk.Label(parent, text="Contrast:").grid(row=6, column=0, sticky='w')
contrast = ttk.Scale(parent, from_=-100, to=100, value=current_settings['contrast'])
contrast.grid(row=6, column=1, sticky='ew')
contrast.bind('<ButtonRelease-1>', lambda e: update_setting('contrast', contrast.get()))
# Destination
ttk.Label(parent, text="Destination:").grid(row=7, column=0, sticky='w')
dest_frame = ttk.Frame(parent)
dest_frame.grid(row=7, column=1, sticky='ew')
destination = tk.StringVar(value=current_settings['destination'])
ttk.Radiobutton(dest_frame, text="Screen", variable=destination, value="Screen",
command=lambda: update_setting('destination', destination.get())).pack(side='left')
ttk.Radiobutton(dest_frame, text="File", variable=destination, value="File",
command=lambda: update_setting('destination', destination.get())).pack(side='left')
# File Format
ttk.Label(parent, text="File Format:").grid(row=8, column=0, sticky='w')
file_format = ttk.Combobox(parent, values=['JPEG', 'PNG', 'BMP', 'TIFF'], state='readonly')
file_format.set(current_settings['file_format'])
file_format.grid(row=8, column=1, sticky='ew')
file_format.bind('<<ComboboxSelected>>', lambda e: update_setting('file_format', file_format.get()))
# File Path
ttk.Label(parent, text="File Path:").grid(row=9, column=0, sticky='w')
global file_path_var
file_path_var = tk.StringVar(value=current_settings['file_path'])
path_frame = ttk.Frame(parent)
path_frame.grid(row=9, column=1, sticky='ew')
ttk.Entry(path_frame, textvariable=file_path_var).pack(side='left', fill='x', expand=True)
ttk.Button(path_frame, text="Browse...", command=browse_file).pack(side='left')
# Scan Button
ttk.Button(parent, text="Scan", command=scan).grid(row=10, column=0, columnspan=2, pady=10)
# ADF Test Button
ttk.Button(parent, text="Test ADF Support", command=test_adf_support).grid(row=11, column=0, columnspan=2, pady=5)
# Main application setup
logging.basicConfig(level=logging.DEBUG)
root = tk.Tk()
root.title("Scanner Application with ADF Test")
# Main frame
main_frame = ttk.Frame(root, padding=10)
main_frame.pack(fill='both', expand=True)
# Settings panel on the left
settings_frame = ttk.LabelFrame(main_frame, text="Scanner Settings", padding=10)
settings_frame.pack(side='left', fill='y')
# Image display area on the right
image_frame = ttk.Frame(main_frame)
image_frame.pack(side='right', fill='both', expand=True)
create_settings_panel(settings_frame)
root.mainloop()
I have this working only in flatbed mode.
But I want ADF mode using Python.
Has anybody experienced this?
You can try rclone, which can sync AWS S3 directly with LocalStack S3.
I think it's NOT a real answer, but a workaround. MS should work on this.
This article helped me. Basically, I had to delete the "My Exported Templates" folder and now Visual Studio created the Folder and the Template.
please restart the database, it will work
"message": "could not execute statement; SQL [n/a]; nested exception is org.hibernate.PessimisticLockException: could not execute statement",
SELECT @@innodb_lock_wait_timeout;
SET innodb_lock_wait_timeout = 100;
"HttpClient": {
"DefaultProxy": {
"Enabled": true,
"Address": "http://your-proxy-server:port",
"BypassOnLocal": false,
"UseDefaultCredentials": false
}
}
Hi, here are a couple of hints.
First, are you using a form to send the data? Sometimes it's as simple as that.
Second, is your token set correctly in the form or HTML page?
And third, make sure that your model is bound correctly in your project.
Good luck
Using this CSS finally solved it
code {
font-family: "verdana";
font-size: 18px;
color: black;
font-weight: bold !important;
line-height: 1.5 !important;
}
@media (max-width: 480px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 481px) and (max-width: 767px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 768px) and (max-width: 1024px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 1025px) and (max-width: 1280px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 1281px) {
code {
font: 18px "verdana" !important;
}
}
To prevent Flutter from uninstalling and reinstalling the app every time you run flutter run, try this:
Connect your device to the machine where Flutter is installed, then open a command prompt and run the flutter logs command. After that, launch the app you want to debug.
This issue can occur if your machine's clock is out of sync.
To fix this issue with SQLAlchemy, you need to use the correct driver syntax. For MySQL connections with public key retrieval, use mysql+pymysql:// instead of mysql:// and add the parameters to the query string:
engine = create_engine('mysql+pymysql://poopoo:peepee@localhost:32913/test?allowPublicKeyRetrieval=true&useSSL=false')
If you're still getting errors, you can also try passing these parameters via connect_args with the correct format:
engine = create_engine(
'mysql+pymysql://poopoo:peepee@localhost:32913/test',
connect_args={
"ssl": {"ssl_mode": "DISABLED"},
"allow_public_key_retrieval": True
}
)
Make sure you have the pymysql driver installed: pip install pymysql.
Install a version of @nestjs/platform-express that is compatible with your project; try a lower version of @nestjs/platform-express.
I'll attach my issue as well, which is similar. I think a solution is to override the _receive method of gremlin-python's connection.py module, simply trying a reconnection before the 'finally' statement that puts the connection back into the pool.
The way I go about this is by using ipykernel in each virtual environment that I need to use with Jupyter notebooks, so that I can switch to the appropriate environment when using the notebook.
All I need to do is switch between environments, keeping only the one meant for the Jupyter notebook.
P.S. There's a new utility in town called Puppy. You might want to give it a read!
Okay, I think there was just something wrong with my build, and it was throwing me off because the debugger stopped working (I tried adding breakpoints and got errors that they wouldn't be hit / couldn't load symbols). I rebuilt everything and it seems to be working. Appreciate all the help!
7 years later, but here to comment that we are happy with the layout [here](https://epiforecasts.io/EpiNow2/stan/), in case anyone is still interested.
The AWS SDK in general has a good way of looking for configuration with minimal intervention from the developer, as long as you make sure the necessary configuration is in place: either granting the necessary IAM policy access or having temporary credentials placed in a config file.
Please have a deep dive into
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html
which basically says you don't need to specify anything if you assign the right policy to the resource your code runs on.
The QuadraticSum constructor takes a heterogeneous list of numbers, linear expressions, and quadratic expressions, and adds them up. So objective_2 is just x+y in your code.
Generally, you do not need to invoke that constructor/use that type directly, just use operator overloads, QuadraticExpression(), and mathopt.fast_sum()
https://github.com/google/or-tools/blob/stable/ortools/math_opt/python/expressions.py#L27
here's my query but it does not show anything on the map.
json_build_object('type', 'Polygon','geometry', ST_AsGeoJSON(ST_Transform(geom, 4326))::json)::text as geojson
any idea?
Please refer to Joel Geraci's reply in this blog, where the save is registered with a callback; from there you can use the updated data.
Same question:
the stream splits tool_calls and returns data: {"choices":[{"delta":{"content":null,"tool_calls":[{"function":{"arguments":"{\"city\": \""},"
so the complete parameters cannot be obtained, and I don't know how to solve it.
Map<String, Object> arguments = ModelOptionsUtils.jsonToMap(functionInput);
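The usual approach is to buffer the argument fragments per tool-call index across the stream and only parse once the stream ends. A minimal Python sketch; the chunk contents (city value, get_weather name) are made up but follow the delta shape quoted above:

```python
import json

# Hypothetical stream of delta chunks in the shape quoted above
chunks = [
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"name": "get_weather", "arguments": "{\"city\": \""}}]}}]},
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"arguments": "Paris\"}"}}]}}]},
]

calls = {}  # tool-call index -> accumulated name/arguments
for chunk in chunks:
    for tc in chunk["choices"][0]["delta"].get("tool_calls", []):
        entry = calls.setdefault(tc["index"], {"name": None, "arguments": ""})
        fn = tc.get("function", {})
        if fn.get("name"):
            entry["name"] = fn["name"]
        entry["arguments"] += fn.get("arguments", "")  # concatenate fragments

# Only after the stream finishes is the JSON complete enough to parse
print(json.loads(calls[0]["arguments"]))
```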
Inspired by @Oliver Matthews' answer, I created a repository on markdown-include: https://github.com/atlasean/markdown-include .
A few points for consideration:
You can achieve this by using synchronization or thread-safe collections.
You are using synchronization in your code, which is correct. But you are trying to use the same Scanner object across multiple threads. Scanner is not thread-safe; each thread should ideally use its own Scanner, or you should synchronize access to it.
You are calling addingForFirstList for all threads, which means you are adding values to the wrong lists. You should call the appropriate method for each list.
You are using separate locks for each list, which is correct; it ensures each list can be accessed independently.
You may also consider the Executor framework, which provides a high-level API for managing threads. And it's crucial to manage shared resources like Scanner.
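The points above can be sketched as follows: one lock per list, one method per list, and the Executor framework managing the threads (class and method names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TwoListDemo {
    private static final List<Integer> firstList = new ArrayList<>();
    private static final List<Integer> secondList = new ArrayList<>();
    // A dedicated lock per list lets the two lists be updated independently
    private static final Object firstLock = new Object();
    private static final Object secondLock = new Object();

    static void addToFirst(int v)  { synchronized (firstLock)  { firstList.add(v); } }
    static void addToSecond(int v) { synchronized (secondLock) { secondList.add(v); } }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            final int v = i;
            pool.submit(() -> addToFirst(v));   // each list through its own method
            pool.submit(() -> addToSecond(v));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(firstList.size() + " " + secondList.size());
    }
}
```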
import matplotlib.pyplot as plt
# Data for the diagram
categories = [
    "Water management problem",
    "Energy problem",
    "Raw materials problem",
    "Global food problem",
    "Protection of the World Ocean",
    "Space exploration"
]
importance = [9, 8, 7, 8, 6, 5] # Importance rating from 1 to 10 for each problem
# Create the diagram
plt.figure(figsize=(10,6))
plt.barh(categories, importance, color='skyblue')
plt.xlabel("Importance of the problem (rating from 1 to 10)")
plt.title("Importance of global problems in the 'society-nature' system in Ukraine")
plt.gca().invert_yaxis() # Invert the order on the Y-axis
plt.tight_layout()
plt.show()
Indeed, it works with "react-native-google-places-autocomplete": "^2.5.6", although in my case (and I imagine for everyone) there is no need to install 'react-native-get-random-values', since this version apparently doesn't need it. In fact, you can save yourself many errors by uninstalling 'react-native-get-random-values' if you had already installed it.
You can use Reactotron. It's easy to set up.
link here https://docs.infinite.red/reactotron/
I decided to drop this idea, because from what I found, nginx and Apache are not able to use a certificate in the middle of the chain as a CA to authenticate clients. In case anyone wonders, I ended up using the same self-signed CA and the same client certificate; I just pinned its fingerprint for the site where I had planned to use the admin-ca.cer certificate.
It shows milliseconds; you have to divide the value by 1000 to get the right value in seconds.
The url field contains data:application/pdf;base64,..., which might not be properly handled by some clients. Try sending the Base64 content separately in a downloadable format instead of embedding it directly in a url.
Force handling errors globally:
I configured Flask and Flask-RESTful to propagate JWT exceptions correctly by adding the following code to __init__.py:
app.config['PROPAGATE_EXCEPTIONS'] = True # Propagate exceptions to the client
api.handle_errors = False # Disable Flask-RESTful's own error handling
This provided the results I was looking for and I successfully tested the JWT lifecycle:
Login: Issued JWT tokens via /api/login.
Valid Token: Accessed protected resource successfully.
Expired Token: Received expected 401 error ("Token has expired").
Token Refresh: Successfully refreshed JWT token via /api/refresh.
New Token: Validated new token with protected endpoint access.
In 2025, using base-select allows styling the <select>; see details here:
https://developer.chrome.com/blog/a-customizable-select and https://codepen.io/web-dot-dev/pen/zxYaXzZ
At the time of writing this is only supported in Chrome and Edge: https://caniuse.com/mdn-css_properties_appearance_base-select
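Per the Chrome article above, opting in is done with the appearance property on both the select element and its picker; a minimal sketch of that documented pattern:

```css
select {
  appearance: base-select;
}
select::picker(select) {
  appearance: base-select;
}
```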
Try this small util I wrote; I found your question while searching for the same topic.
https://github.com/denistiano/bertsonify
I was solving the same thing. If trained with quality data, it seems to give okay results. Obviously, the more complex the object, the harder it is to get quality output.
Welcome to Windows app development! Here's a breakdown of compatibility for different Windows versions and some guidance on how to approach development.
Microsoft has had different development frameworks for its platforms, and compatibility depends on which one you are using:
⚠️ Windows Phone 7.8 – Not fully compatible. While some APIs may work, Windows Phone 8 introduced new capabilities (e.g., native code support, Direct3D) that are not backward compatible. You would need to target Windows Phone 7.x separately.
❌ Windows RT – Not compatible. Windows RT (for ARM-based tablets) runs apps built for Windows Store (Metro/Modern UI apps), not Windows Phone.
❌ Windows 8 / 8.1 – Not directly compatible. Windows 8 and 8.1 use WinRT (Windows Runtime), which is different from Windows Phone 8 SDK. However, you can share code if you create a Universal App for both platforms.
✅ Windows RT apps work on Windows 8/8.1 but not on Windows Phone without modification.
❌ Windows Phone apps won’t run on Windows 8/8.1 or RT without adaptation.
If you want your app to run across multiple platforms, consider these approaches:
Use Windows Phone 7.x SDK (if targeting WP7.8)
If your app must support Windows Phone 7.8, use the Windows Phone SDK 7.1 (not 8.0).
However, WP7.8 is very outdated, and it’s better to focus on newer versions.
Develop a Universal Windows App (for Windows 8.1 and Windows Phone 8.1)
If you want to support both Windows Phone 8.1 and Windows 8.1, use the Universal Windows App framework.
This lets you share a common codebase while keeping platform-specific optimizations.
Target UWP (Universal Windows Platform) for Future-Proofing
Using the outline utilities, you can change the focus outline color in Tailwind CSS.
For example:
<input
className="focus:outline-gray-400"
/>
Yes, you are correct: the Play Store and App Store do not allow you to publish an app for only certain cities. But you can handle it on your end by fetching the user's current location. You will easily get the user's latitude and longitude, from which you can get their state or city, and then proceed with your logic only if the user is in your desired location.
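The final gating step can be as simple as a city allow-list check. A minimal TypeScript sketch; the city names are made up, and it assumes you have already reverse-geocoded the latitude/longitude into a city name:

```typescript
// Hypothetical launch cities; replace with your own
const ALLOWED_CITIES = ["Bengaluru", "Mumbai"];

// `city` would come from reverse-geocoding the user's latitude/longitude
function isServiceAvailable(city: string, allowed: string[] = ALLOWED_CITIES): boolean {
  // Normalize whitespace and case so "  bengaluru " still matches
  return allowed.some(c => c.toLowerCase() === city.trim().toLowerCase());
}

console.log(isServiceAvailable("  bengaluru "));
console.log(isServiceAvailable("Delhi"));
```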
Sequoia 15.3.2 M3 chip.
brew install cmake didn't work for me, so I tried the following:
brew install pkg-config
brew install cmake libgit2
bundle install
I read the article linked in the first comment and solved it!
#include <bits/stdc++.h>
Instead of adding a catch-all header like the one above, I added only the necessary headers, as shown below.
#include <string>
#include <array>
#include <bitset>
#include <utility>
#include <iostream>
#include <iomanip>
#include <future>
And adding -stdlib=libc++ to the compile command solved it!
g++ -std=c++17 -stdlib=libc++ -I/opt/homebrew/opt/cryptopp/include/cryptopp -L/opt/homebrew/opt/cryptopp/lib DES_Encryption.cpp -lcryptopp -o DES_Encryption && ./DES_Encryption
Thank you to everyone who responded.
Starting at line 85 you can see the issue. That shouldn't be there.
}
[root@server ~]# sudo vi /etc/nginx/nginx.conf
1. Create a batch file "run.bat" and write your set of commands in that file.
2. Create a task in Task Scheduler and give it a proper name; in the Triggers section choose "Weekly" and set it to run every Monday at your desired time.
3. In the Actions section, choose "Start a program" and select the batch file created above.
4. Save.
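The steps above can also be sketched from the command line: a minimal run.bat plus the equivalent schtasks registration command (run separately in a console, not inside the batch file). The task name, paths, and time below are placeholders.

```bat
:: run.bat -- the commands your weekly job should run (illustrative)
@echo off
echo Weekly job ran at %DATE% %TIME% >> C:\logs\weekly.log

:: Instead of clicking through Task Scheduler, the same weekly Monday
:: trigger can be registered with (placeholders for name/path/time):
:: schtasks /Create /TN "MyWeeklyJob" /TR "C:\scripts\run.bat" /SC WEEKLY /D MON /ST 09:00
```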
You are mixing two different packages, which are both shadcn ports for flutter.
The example is from shadcn_flutter but the tabs are from shadcn_ui. Pick one of the two and use it consistently.
var userName = result.ClaimsPrincipal?.Claims
    .FirstOrDefault(c => c.Type.Equals("name", StringComparison.OrdinalIgnoreCase))?.Value;
As per the docs (https://cloud.google.com/compute/docs/create-windows-server-vm-instance), Windows is not covered under the free trial.
A possible next step is to activate a full billing account: https://cloud.google.com/free/docs/free-cloud-features#how-to-upgrade
Install the dom-speech-recognition type definitions:
npm install --save-dev @types/dom-speech-recognition
https://www.npmjs.com/package/@types/dom-speech-recognition
I'm experiencing the same issue. When I download it, it pulls empty data and doesn't show anything. This problem probably started today. Unfortunately, we weren't able to send reports to our clients.
I created a repository for markdown-include: https://github.com/atlazean/markdown-include
The Chart Visualizer in Megaladata does not directly support aggregation operations. It is designed to display the relationship between fields.
To aggregate data before visualization:
1. Use the Grouping component to aggregate your data (e.g., sum, average, count).
2. Use the aggregated data as input for the Chart Visualizer.
If the program terminates abruptly due to an unhandled exception, destructors for global and local objects might not be called: the C++ runtime calls std::terminate, and whether the stack is unwound first is implementation-defined.
Try using smart pointers (RAII) for resource management, and catch exceptions at the top level so that stack unwinding (and therefore cleanup) actually happens.
Is there any update on how this can be done? Currently I also need similar functionality in a table.
Thanks to @JulianKoster, I realized that asset-mapper wasn't installed, which caused this issue. Here is the fix:
composer require symfony/asset-mapper symfony/asset symfony/twig-pack
Read: https://symfony.com/doc/current/frontend/asset_mapper.html
For this level of detail you'll need to write a custom reporter plug-in. This will have the ability to introspect the whole workflow and you can also examine input/output files themselves (needed for 3) which a generic plugin is not going to do.
You could then export all the info as JSON, or you could have your plugin update your database directly.
See:
https://github.com/snakemake/snakemake-interface-report-plugins/tree/main
And here is an example of a plugin:
Note - when making a test plugin I found that using poetry, as suggested here, was more of a hindrance than a help, but YMMV.
You could use COALESCE to replace the NULL values:
SELECT
    coalesce(path, '') AS path,
    comment
FROM read_csv('${path}')
I prefer SonarLint; it highlights possible NullPointerException risks.
Thanks for all the useful remarks,
Seems like std::bit_cast is the right way to do this. It requires C++20, but I think I'm OK with that.
Apparently I was missing the @app.function_name decorator for every function, and I had to fix the imports from
from . import RSSNewsletter
to
import RSSNewsletter
One of the best tools I use in my apps is the Talker package. It provides a logs screen to track every log and error in your app. Check the docs here.
My first suspicion here would be a memory-related error. These will show up in the kernel log:
$ sudo dmesg -T
as OOM events. You could also use ltrace on the application to look for malloc() calls that fail (strace only shows the underlying brk/mmap system calls), but you do have to make sure you run it on the underlying binary, not any wrapper script that might be being invoked in the rule.
If the application has just enough memory available when run outside of Snakemake then it may be the overhead of Snakemake pushing it over the edge. Also, with Snakemake are you running one job at a time or allowing it to run multiple jobs in parallel? Are you using multiple threads within the rule?
How to do it:
Method 01:
You can simply create an AWS DataSync task for this.
First, create a DataSync task with the source and destination locations. Since you are copying data from one S3 bucket to another, the task won't need an AWS agent.
Then run the task; the data will be migrated to the destination bucket (the time taken depends on the total size of the data you are migrating).
Price:
Method 02:
You can enable Cross-Region Replication (CRR) on S3.
These articles will guide you through replicating an existing S3 bucket. I think this method will be the more cost-effective option for your task.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
https://aws.amazon.com/getting-started/hands-on/replicate-data-using-amazon-s3-replication/
I prefer using the SonarQube for IDE plugin. It shows more potential problems and describes the whats and whys.
Now it only supports .exe and .MSI format packages.
I ran into the same problem and solved it. In my case, there is a space in the font name: (Spleen 32x64). And instead of entering:
Spleen 32x64
in the "Font Family" field, I simply added quote marks, like:
"Spleen 32x64"
And it works.
The surprising result you're seeing, where an O(n log n) algorithm performs faster than an O(n) algorithm, is due to several practical factors:
Constant Factors and Lower-Level Operations: Even though the theoretical time complexity of sorting is O(n log n), the constants involved in sorting (like in Timsort, which is the algorithm used by Python's sort()) can sometimes outperform O(n) solutions, especially when the input size is small or when the implementation of the O(n) solution involves costly operations.
Efficient Sorting Algorithms: The Timsort algorithm used in Python is highly optimized for practical use cases. It is particularly fast on real-world data, especially if there are ordered or partially ordered sequences in the input. Even though the sorting step theoretically has higher time complexity, in practice, it can run faster because of optimizations that reduce the constant factors.
Set Operations Overhead: In your O(n) solution, you're relying heavily on set operations, specifically in and add. While these operations are average O(1), they can sometimes take more time than expected because of factors like hash collisions, dynamic resizing, or poor cache locality when iterating over the set. These operations might not be as fast as they theoretically should be, especially when you're performing a lot of lookups or insertions.
Repeated Operations in the First Algorithm: In your first algorithm, you're doing the following:
while (num + 1) in s:
num += 1
current_streak += 1
This loop could lead to repeated set lookups for numbers that are consecutive. Since you're iterating over nums and performing a lookup operation for every number in the set, this could end up causing a lot of redundant work. Specifically, for each number, you're incrementing num and repeatedly checking num + 1. If there are a lot of consecutive numbers, this can quickly become inefficient.
The time complexity here might still be O(n) in theory, but due to the redundant operations, you're hitting a performance bottleneck, leading to TLE.
Efficiency of the Second Algorithm: In the second algorithm, you've made a few optimizations:
next_num = num + 1
while next_num in nums:
next_num += 1
Here, the check for next_num in nums is still O(1) on average, and the update to next_num skips over consecutive numbers directly without performing additional redundant lookups. This change reduces the number of unnecessary checks, improving the algorithm’s performance and avoiding redundant work.
Even though the theoretical time complexity is the same in both cases (O(n)), the second version is faster because it avoids unnecessary operations and works more efficiently with the set lookups.
Impact of Set Operations: In the first solution, you may have faced inefficiencies due to the use of the current_streak variable and updating num during iteration. Additionally, by modifying num in the loop, you're creating potential confusion and inefficient memory access patterns (e.g., reusing the same variable and performing multiple lookups for numbers that are already part of the streak).
The second solution benefits from using next_num as a separate variable, which simplifies the logic and makes the code more efficient by focusing on skipping over consecutive numbers directly without redundant checks.
O(n log n) solutions can sometimes perform faster than O(n) in practice due to constant factors, the specific nature of the data, and the efficiency of underlying algorithms like Timsort.
Your first O(n) solution caused TLE due to redundant operations and inefficiencies in how consecutive numbers were processed.
Your second O(n) solution passed because it streamlined the logic, minimized redundant operations, and worked more efficiently with the set data structure.
Optimizing algorithms often involves reducing redundant operations and ensuring that you don't perform the same work multiple times. Even with the same time complexity, how you structure the code and the operations you choose can significantly affect performance.
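Putting the streamlined approach described above together, here is a sketch of the full O(n) version (this assumes the standard "longest consecutive sequence" problem shape; the function name is illustrative):

```python
def longest_consecutive(nums_list):
    """Length of the longest run of consecutive integers, O(n) on average."""
    nums = set(nums_list)            # O(1) average-case membership tests
    longest = 0
    for num in nums:
        if num - 1 in nums:          # only start counting at a run's beginning
            continue
        next_num = num + 1
        while next_num in nums:      # walk the run exactly once
            next_num += 1
        longest = max(longest, next_num - num)
    return longest

print(longest_consecutive([100, 4, 200, 1, 3, 2]))  # 4  (the run 1, 2, 3, 4)
```

The `if num - 1 in nums: continue` guard is what removes the redundant work: each run is walked once from its smallest element, so the total number of set lookups stays linear.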
It seems to have been fixed in latest release (65.6.0).
val_counts = df["x"].value_counts()  # how often each value of "x" occurs
filtered_df = df[df["x"].map(val_counts) <= ceiling]  # keep rows whose value occurs at most ceiling times
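A quick self-contained demonstration, assuming a df and ceiling along the lines of the question (the sample data is made up):

```python
import pandas as pd

df = pd.DataFrame({"x": ["a", "a", "a", "b", "b", "c"]})
ceiling = 2  # keep only values of "x" occurring at most this many times

val_counts = df["x"].value_counts()                   # a -> 3, b -> 2, c -> 1
filtered_df = df[df["x"].map(val_counts) <= ceiling]  # drops the "a" rows
print(filtered_df["x"].tolist())  # ['b', 'b', 'c']
```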
By default the tooltip aggregates the data from one xAxis, but you can override it with a tooltip.formatter; see the link to the API: https://api.highcharts.com/highcharts/tooltip.formatter
The starting point can be like this:
tooltip: {
shared: true,
formatter: function () {
let tooltipText = '<b>' + this.x + '</b>';
this.points.forEach(point => {
tooltipText += '<br/>' + point.series.name + ': ' + point.y;
});
return tooltipText;
}
}
Please see a simplified config, where you can get the shared tooltip for multiple axes, I trust you will be able to adjust it for your project: https://jsfiddle.net/BlackLabel/pvr1zg26/
ISO certification itself doesn’t guarantee anything about the language (like English, Spanish, etc.) being used.
Instead, ISO standards focus on processes, quality, consistency, and compliance, regardless of the language.
For example:
ISO 9001 (Quality Management) ensures an organization follows consistent quality processes.
ISO 27001 (Information Security) ensures data is protected based on defined standards.
These standards can be documented and implemented in any language as long as:
The processes are clearly understood.
The implementation matches the intent of the ISO standard.
The audit documentation is available in a language the auditor understands.
I have the following challenge: I'm using Dapper to access two databases in the same codebase.
Database 1 uses UTC dates (I could change this but would prefer not to).
Database 2 uses local dates (not something I can change).
These type handlers are static, which means they are not repository/connection-string specific:
SqlMapper.AddTypeHandler(new DateTimeUtcHelper());
Any ideas on how to solve this problem?
(I could implement DateTimeOffset in Database 1 so the data type is different.)
When dealing with localized strings in Swift, especially for UI elements, choosing the right approach is crucial. Here’s a breakdown of the options:
LocalizedStringKey (Best for SwiftUI)
Use when: You are directly using a string in a SwiftUI view (e.g., Text("hello_world")).
Why? SwiftUI automatically localizes LocalizedStringKey, making it the best choice for UI text.
Example:
Text("hello_world") // Automatically looks for "hello_world" in Localizable.strings
Pros:
✅ No need to manually use NSLocalizedString
✅ Cleaner SwiftUI code
✅ Supports string interpolation
Cons:
❌ Can’t be used outside SwiftUI (e.g., in business logic)
LocalizedStringResource (Best for Performance)
Use when: You need efficient string translation with better memory handling.
Introduced in: iOS 16
Why? It is more optimized than LocalizedStringKey, but still works well with SwiftUI.
Example:
Text(LocalizedStringResource("hello_world"))
Pros:
✅ More optimized for localization
✅ Reduces memory overhead
Cons:
❌ Requires iOS 16+
String with NSLocalizedString (Best for Non-SwiftUI Code)
Use when: You are not using SwiftUI, but need translations in ViewModels, controllers, or business logic.
Why? NSLocalizedString fetches translations from Localizable.strings.
Example:
let greeting = NSLocalizedString("hello_world", comment: "Greeting message")
print(greeting)
Pros:
✅ Works anywhere (UIKit, business logic, networking)
✅ Supports dynamic strings
Cons:
❌ Not automatically localized in SwiftUI
❌ More verbose
In the "Plots" panel you have the "zoom" option, which detaches the plot window and allows you to view it full-screen. Usually the resolution doesn't drop in the process. If you want to inspect the plot in the IDE, that's a good solution.
Additionally, if you want to quickly export the file, you can just take a screenshot of the full-screen plot.
Same issue here. Fresh setup for Eclipse 2025-03.
Windows -> Preferences -> Version Control -> selecting the SVN node will produce:
I didn't find the bug (the code seems OK), but I wouldn't disable gravity at runtime. Instead I would toggle the isKinematic flag on/off; that way (when isKinematic is on) you know that no forces are affecting your player. And for the slopes I would just apply a bigger force.
Not having the exact same issues as you, but definitely having issues in this update. Preview is super slow and buggy. As soon as I use a text field anywhere, even in a basic test, I get the error "this application, or a library it uses, has passed an invalid numeric value (NaN, or not-a-number) to CoreGraphics API and this value is being ignored. Please fix this problem." in the console. Build times definitely seem soooooo much slower; it's making the process annoying when it doesn't need to be.
I've cleaned the derived data, tried killing every Xcode process going, restarted a billion times lol. Great update this time around.
qpdf input.pdf --overlay stamp.pdf --repeat=z -- result.pdf
This dropdown behavior is likely managed by a TabControl or a custom tabbed document manager within your application. Here are some areas to investigate:
1. Check TabControl Properties:
If you're using a TabControl, check if SizeMode is set to Fixed or FillToRight, as this can affect how the dropdown appears.
Look for TabControl properties like DrawMode, Padding, and Multiline that might be affecting the display.
2. Event Handling for Window Resizing:
If resizing triggers the dropdown to appear, the control might not be refreshing correctly. Look for Resize or Layout event handlers where the tab control is refreshed (Invalidate(), Refresh()).
3. ScintillaNET or Custom UI Code:
Since you’re using ScintillaNET, there might be a custom tab manager handling open documents. Check for any Scintilla or related UI event handlers that modify the tab behavior.
4. Force a Refresh When a Tab is Added:
If new tabs are being added dynamically, make sure the control is properly updated. Try manually forcing a redraw when a new tab is added:
tabControl.Invalidate();
tabControl.Update();
5. Debugging Strategy:
Set breakpoints in places where tabs are created, removed, or refreshed.
Try manually calling tabControl.Refresh() after adding tabs to see if it immediately triggers the dropdown.