Years ago, I needed a simple, reliable logger with zero dependencies for a Test Automation Framework. I ended up building one and just published it on GitHub and NuGet as ZeroFrictionLogger - MIT licensed and open source.
It looks like it could be a good fit for your question.
Can you please check the version of Spring Boot you are currently using? registerLazyIfNotAlreadyRegistered exists in the Spring Data version shipped with Spring Boot 3.3+ (the surrounding API was introduced around Spring Boot 3.2), so you may have an older Spring Data version on the classpath.
If you have mixed versions (for example, Spring Boot 3.3.x pulling in Spring Data JPA 3.3.x while another library in your project brings in an older Spring Data JPA, like 3.1.x or 2.x), try running:
mvn dependency:tree | grep spring-data-jpa
mvn dependency:tree | grep spring-data-commons
If you see two versions of spring-data-jpa, remove the older one. To trace where each version comes from, run:
mvn dependency:tree -Dverbose | grep "spring-data-jpa"
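If the duplicate comes in transitively, one way to fix it is a Maven exclusion on the offending dependency (the coordinates below are placeholders for whatever the tree shows); Spring Boot's dependency management then supplies the matching version:

```xml
<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-library</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <!-- Exclude the transitive older version; Spring Boot supplies the right one -->
        <exclusion>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-jpa</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```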
The optimize-autoloader addition to composer.json works for custom classes like Models, but not for vendor classes like Carbon.
This can be achieved by publishing Tinker's config file, and adding Carbon as an alias.
Run php artisan vendor:publish --provider="Laravel\Tinker\TinkerServiceProvider" to generate `config/tinker.php` if the file doesn't already exist.
Edit the alias array in this file to alias carbon:
'alias' => [
'Carbon\Carbon' => 'Carbon',
]
Then, run composer dump-autoload.
Tinker should now automatically alias Carbon, allowing Carbon::now() to work without using the full namespace.
Please take a look at this reference; it will give you a better understanding:
https://docs.oracle.com/javase/tutorial/jdbc/basics/sqlxml.html
$user = Read-Host "User"
$pass = Read-Host "Password" -AsSecureString
$cred = New-Object System.Management.Automation.PSCredential -ArgumentList $user, $pass
Find-Package -Name "$packageName" -ProviderName NuGet -AllVersions -Credential $cred
Back in 2017, I was looking for a modern database ORM 😄
<!-- Image gallery -->
<div class="gallery">
<img src="image1.jpg" onclick="openModal(0)">
<img src="image2.jpg" onclick="openModal(1)">
<img src="image3.jpg" onclick="openModal(2)">
</div>
<!-- Image viewer modal -->
<div id="modal" style="display:none;">
<button onclick="prevImage()">←</button>
<img id="modal-img" src="">
<button onclick="nextImage()">→</button>
<button onclick="closeModal()">Close</button>
</div>
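The markup references openModal, prevImage, nextImage and closeModal, which aren't shown; here is a minimal sketch of those handlers (the image paths are assumptions matching the <img> tags above):

```javascript
// Minimal gallery/modal handlers for the markup above; image paths are assumptions.
const images = ["image1.jpg", "image2.jpg", "image3.jpg"];
let currentIndex = 0;

function showCurrent() {
  document.getElementById("modal-img").src = images[currentIndex];
}

function openModal(index) {
  currentIndex = index;
  document.getElementById("modal").style.display = "block";
  showCurrent();
}

function closeModal() {
  document.getElementById("modal").style.display = "none";
}

function nextImage() {
  currentIndex = (currentIndex + 1) % images.length; // wrap forward
  showCurrent();
}

function prevImage() {
  currentIndex = (currentIndex - 1 + images.length) % images.length; // wrap backward
  showCurrent();
}
```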
Change
src: url('../fonts/Rockness.ttf');
to
src: url('../fonts/Rockness.ttf') format('truetype');
The solution was to update ReSharper to the newest version, 2025.2 (Build 252.0.20250812.71120, built on 2025-08-12).
content of e.g. logging.h
#include <QDebug>
#define LOG(Verbosity) q##Verbosity().noquote().nospace()
then use it like:
#include "logging.h"
QString abc = "bla";
LOG(Info) << abc;
LOG(Debug) << abc;
cheers
Thilo
This may not be the solution for your issue.
However, in my case I found that the issue was to do with an incompatible sql.js version that had been updated.
I found that versioning sql.js in my package.json to ~1.12.0 resolved this issue for me.
"sql.js": "~1.12.0",
First click on the field, then write the text:
local username = splash:select('input[name=username]')
username:mouse_click() -- click on field
splash:send_text('foobar') -- write text
There is no <Head> component in the App Router, so this would only work with the Pages Router.
I think what you are looking for is JFrog Curation.
What Windows calls OwnerAuthFull is the base64-encoded lockout password (I believe this is terminology inherited from TPM 1.2). You can test it with tpm2_dictionarylockout -c -p file:key.bin, where key.bin contains that password after decoding it with base64 -d.
The TPM2 owner password (owner / storage hierarchy) is unset; you can verify that with this command:
# tpm2_getcap properties-variable | grep AuthSet
ownerAuthSet: 0
endorsementAuthSet: 0
lockoutAuthSet: 1
For me, it works using a dot before the index:
-DHttpServerConfig.sourceFilePath.0=qwerty -DHttpServerConfig.sourceFilePath.1=asdfg
If you set equal indexes, the last value overrides the previous.
I actually ran into the exact same struggle recently when trying to get Google Picker working in a Streamlit app (though I didn’t try it with ngrok). I’m more of a Python person too, so mixing in the JavaScript OAuth flow was… let’s just say “fun.” 😅
In the end, I decided to build a Streamlit component for it — wraps the Google Picker API and works with a normal OAuth2 flow in Python.
It supports:
Picking files or folders from Google Drive
Multi-select
Filtering by file type/MIME type
Returns st.file_uploader-style Python UploadedFile objects you can read right away
You can install it with:
pip install streamlit-google-picker
Might save you from fighting with the JavaScript side — and even if I didn’t try it with ngrok, there’s no reason it shouldn’t work.
You can also check the right way to set up the Google Cloud settings: Demo + Google Cloud setup guide (Medium)
I used this condition and it works, though the dialog does not show up in disambiguation:
intents.size() > 1 && intents.contains('intentName1') && intents.contains('intentName2')
As of August 2025 the location of the cl.exe is in the path:
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64
You have to install Visual Studio from here: https://visualstudio.microsoft.com/
Remember to select Desktop Development with C++ when installing otherwise cl.exe will not exist.
It's a bit of a workaround, but I think this answer might help: https://stackoverflow.com/a/58273676/15545685
Applied to your case:
library(tidyverse)
library(ggforce)
dat <- data.frame(
date = seq(as.Date('2024-01-01'),
as.Date('2024-12-31'),
'1 day'),
value = rnorm(366, 10, 3)
)
p0 <- dat |>
ggplot(aes(x = date, y = value)) +
geom_point() +
labs(x = NULL,
y = 'Value') +
theme_bw(base_size = 16) +
scale_x_date(date_labels = '%b %d') +
facet_zoom(xlim = c(as.Date("2024-06-01"), as.Date("2024-06-20")))
p1 <- p0 +
scale_x_date(breaks = seq(as.Date('2024-01-01'),
as.Date('2024-12-31'),
'1 month'),
limits = c(as.Date('2024-01-01'),
as.Date('2024-12-31')),
date_labels = '%b\n%Y')
gp0 <- ggplot_build(p0)
gp1 <- ggplot_build(p1)
k <- gp1$layout$layout$SCALE_X[gp1$layout$layout$name == "x"]
gp1$layout$panel_scales_x[[k]]$limits <- gp1$layout$panel_scales_x[[k]]$range$range
k <- gp1$layout$layout$PANEL[gp1$layout$layout$name == "x"]
gp1$layout$panel_params[[k]] <- gp0$layout$panel_params[[k]]
gt1 <- ggplot_gtable(gp1)
grid::grid.draw(gt1)
Replace with this
select[required] {
padding: 0;
background: transparent;
color: transparent;
border: none;
}
How about not using awk at all?
echo 255.255.192.0 | sh -c 'IFS=.; read m ; n=0; for o in $m ; do n=$((($n<<8) + $o)); done; s=$((1<<31)); p=0; while [ $(($n & $s)) -gt 0 ] ; do s=$((s>>1)) p=$((p+1)); done; echo $p'
Annotation: IFS is the input field separator; it splits the netmask into individual octets. So $m becomes the set of four numbers. The next variable $n is going to be the 32 bit number of the netmask, constructed by going over each octed $o (the iterator in the first loop) and shifting it 8 bits left. The second loop uses $s (the 'shifter') as a 32 bit number with only a single 1 bit, starting at position 32; while it shifts down it is compared (bitwise &) to the mask and the return value $p increases every time there is a 1 until there is no more match (so it stops at the first 0 bit).
discord.py (which nextcord is a wrapper of) has resolved this issue in https://github.com/Rapptz/discord.py/issues/10207; I believe an update of the nextcord package or the discord.py package should resolve this issue.
You can make modelValue a discriminated union key by range so TS can infer the correct type automatically. For example:
type Props =
| { range: true, modelValue: [number, number] }
| { range?: false, modelValue: number };
Then use that type in your defineProps and defineEmits so no casting is needed.
Maybe the unique parameter in the column annotation could help you?
#[ORM\Column(unique: true, name: "app_id", type: Types::BIGINT)]
private ?int $appId = null;
If the user is supposed to be unique, maybe a OneToOne Relation could be better than a ManyToOne. I am pretty sure using OneToOne will also generate a unique index in your migration for you, even without the unique parameter.
#[ORM\OneToOne(inversedBy: 'userMobileApp', cascade: ['persist', 'remove'])]
#[ORM\JoinColumn(name: "user_id", nullable: false)]
private ?User $user = null;
After adding separate configurations for the two web applications, I'm encountering an issue with the custom binding for the second web app. I already have a setup for custom binding and DNS for the first web app.
Here's a lazy solution compared to the answers above: my Xcode project threw this error while an iPad was connected for testing. I tried deleting DerivedData, restarting Xcode, etc., but none of that helped. I ended up abandoning that project and creating a new one. The new project does not throw this error anymore.
If you are on Mac and have been running your script like this python my-script.py, you might want to try running it with sudo. I spent 30 minutes debugging correct code before realizing that "requests" needs sudo permissions
I have the same question. Unfortunately, both links in the highlighted answer are now outdated. Does anyone have newer info on this?
For the condition I tried:
#intentName1 && #intentName2
intents.contains('intentName1') && intents.contains('intentName2')
intents.values.contains('intentName1') && intents.values.contains('intentName2')
The first two didn't throw an error but the dialog was just skipped when I entered an utterance in which both intents were recognized. The final one threw an error:
SpEL evaluation error: Expression [intents.size() > 0 && intents.values.contains('intentName1') && intents.values.contains('intentName2') && @entityName] converted to [intents.size() > 0 && intents.values.contains('intentName1') && intents.values.contains('intentName2') && entities['entityName']?.value] at position 73: EL1008E: Property or field 'values' cannot be found on object of type 'CluIntentResponseList' - maybe not public or not valid?
In the plugin developed for OPA (https://github.com/EOEPCA/keycloak-opa-plugin), it seems that the Admin UI was customised (see js/apps/admin-ui/src/clients/authorization/policy).
You have to manually allow location access from the phone settings by going to Settings > Privacy and Security > Location > Safari (or any other browser).
Got it — sounds like you’re trying to bypass the whole “training” aspect and just hard-code your decision logic in a tree-like form. In that case, sklearn’s DecisionTreeClassifier isn’t really the right tool, since it’s built to learn from data. A custom tree structure, like the Node class example given, would give you more control and let you directly define each condition without needing any training step. This way, you still get the decision-tree behavior, but exactly how you’ve designed it.
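In case it helps, here is a minimal sketch of such a hand-rolled tree (the class shape, feature indices, and thresholds are all illustrative, not from sklearn):

```python
# A hand-rolled decision tree: each Node either tests a feature or holds a leaf label.
class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature      # index of the feature to test
        self.threshold = threshold  # go left if x[feature] <= threshold
        self.left = left
        self.right = right
        self.label = label          # set only on leaf nodes

    def predict(self, x):
        if self.label is not None:  # leaf: return the hard-coded decision
            return self.label
        if x[self.feature] <= self.threshold:
            return self.left.predict(x)
        return self.right.predict(x)

# Hard-coded logic: if x[0] <= 5 -> "low"; else if x[1] <= 2 -> "mid"; else "high"
tree = Node(feature=0, threshold=5,
            left=Node(label="low"),
            right=Node(feature=1, threshold=2,
                       left=Node(label="mid"),
                       right=Node(label="high")))

print(tree.predict([3, 0]))   # low
print(tree.predict([7, 1]))   # mid
print(tree.predict([7, 9]))   # high
```

No training step is involved; the conditions are exactly what you wrote down.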
Try SELECT CONCAT(REPLICATE('0', 16 - LEN(NAG)), NAG) AS NAG16
where NAG is your varchar field and 16 is the length that you need.
It seems this behaviour of SvelteKit is not replicable in NextJS. There is a similar feature in NextJS called prerendering, but prerendering only works for static pages.
For dynamic pages, the server components start to render on the server only after the page is navigated to. If needed, a suspense boundary can be used as a placeholder (which is displayed instantly) before the whole page is rendered.
With respect to wasted bandwidth of fetching links when it comes into viewport, @oscar-hermoso's answer of switching the prefetch option to on hover works.
After using both frameworks, it feels as if SvelteKit is really well thought out. NextJS relies on a CDN to make the site fast; SvelteKit uses a simple but clever trick. So when end users use the site, the SvelteKit version feels much faster.
For me, adding this line to the top of the requirements.txt file let me install the packages successfully.
torch==2.2.2
I don't know whether this is any help, but I fixed a similar issue just by putting double quotes around the echo line.
I would recommend taking a look at the Mongoose Networking library.
It's a lightweight open-source networking library designed specifically for embedded systems. It includes full support for most networking protocols, including MQTT. With the MQTT support, you can build not just a client, but also an MQTT broker. The library is highly portable and supports a wide variety of microcontrollers and platforms. It can run on a bare-metal environment or with an RTOS like FreeRTOS or Zephyr.
Mongoose has solid documentation of all its features and usage, and you can find an example of building a simple MQTT server here.
Heads up: I am part of the Mongoose development team. Hope this solves your problem!
Adding one more suggestion for Kubernetes clusters (future readers may find this useful):
Check whether your clock is skewed by using one of these commands: chronyc tracking or timedatectl status
If Leap Status is "Not Synchronised", perform NTP synchronization.
The official SQLMesh documentation and source code currently focus on Slack and email as supported notification targets. There is no out-of-the-box support for Microsoft Teams mentioned.
However, since Teams supports incoming webhooks similar to Slack, you can likely adapt the Slack webhook configuration for Teams by:
Creating an Incoming Webhook in your Teams channel.
Using that webhook URL in your SQLMesh notification configuration.
Formatting the payload to match Teams' connector message format: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using
Try configuring a Teams webhook and test sending a JSON payload from SQLMesh using the same mechanism as Slack.
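A minimal sketch of that last step in Python, standard library only (the webhook URL is a placeholder for the one Teams generates for your channel):

```python
import json
import urllib.request

# Placeholder: use the incoming-webhook URL generated for your Teams channel.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."

def build_payload(text):
    # The simple {"text": ...} shape is accepted by Teams incoming webhooks.
    return json.dumps({"text": text}).encode("utf-8")

def send_teams_message(text):
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # performs the HTTP POST
        return resp.status
```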
What I’d do in UiPath is pretty straightforward:
Read the new data from the first Excel file using Read Range (under the Modern Excel activities).
Read the existing data from the target sheet in the second file.
Combine them: put the new data above the existing data in a single DataTable, using the Merge Data Table activity, or by using newDataTable.Clone() and importing rows in the right order.
Write the merged table back to the target sheet using Write Range.
Basically, you’re replacing the sheet content with “new rows first, then old rows” instead of trying to physically insert rows at the top in Excel, which UiPath doesn’t handle directly.
Reset someClass.someProp = null; before the second render, or use beforeEach to mock and reset state properly.
With Windows 11, it was additionally necessary for me to add sqlservr.exe to the allowed firewall apps.
I followed those instructions:
https://docs.driveworkspro.com/Topic/HowToConfigureWindowsFirewallForSQLServer
Thanks for all,
Harald
I realize that this thread is really old, but perhaps it's still alive enough for someone to help me out. On a daily basis, I have different documents in which I need to highlight certain words (they change with every doc), and I'd like an easy way to tell Google Docs to highlight those words in yellow each time. The previous posts seem to provide some info, but I can't figure out how to get any of them to run properly. I envision a Google Docs "template" into which I would copy the text, then run some type of script based on the keywords (even if I have to manually edit the script each time) to highlight the words, and finally copy that altered text into the final document. But I need step-by-step instructions on how to do this.
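In case it helps, here is a minimal Google Apps Script sketch along those lines (open the doc, go to Extensions > Apps Script, paste it, and run highlightKeywords; the KEYWORDS list is a placeholder to edit for each document):

```javascript
// Highlights each keyword in the active Google Doc in yellow.
// Edit KEYWORDS per document; this is a sketch, not a polished add-on.
var KEYWORDS = ["alpha", "beta", "gamma"];

function escapeForFindText(word) {
  // body.findText() treats its argument as a regex, so escape special characters.
  return word.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function highlightKeywords() {
  var body = DocumentApp.getActiveDocument().getBody();
  KEYWORDS.forEach(function (word) {
    var range = body.findText(escapeForFindText(word));
    while (range !== null) {
      var text = range.getElement().asText();
      text.setBackgroundColor(range.getStartOffset(),
                              range.getEndOffsetInclusive(),
                              "#FFFF00"); // yellow
      range = body.findText(escapeForFindText(word), range); // next match
    }
  });
}
```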
Try to wrap it in CDATA construct.
An example present in the below link shows the case:
<![CDATA[
Within this Character Data block I can
use double dashes as much as I want (along with <, &, ', and ")
*and* %MyParamEntity; will be expanded to the text
"Has been expanded" ... however, I can't use
the CEND sequence. If I need to use CEND I must escape one of the
brackets or the greater-than sign using concatenated CDATA sections.
]]>
More to read:
What does <![CDATA[]]> in XML mean?
4 0 obj
(Identity)
endobj
5 0 obj
(Adobe)
endobj
8 0 obj
<<
/Filter /FlateDecode
/Length 178320
/Length1 537536
/Type /Stream
>>
stream
[... binary FlateDecode stream data (178,320 bytes) omitted ...]
I also have the same error. My HOMEDRIVE and HOMEPATH seem to be correct; however, when I type bash, my WSL bash starts instead of the msys2 bash. I also have the msys path in my environment variables so I can use certain packages natively, so this could also be causing issues. Any suggestions?
Forgive me if someone already answered this, but from what I understand, it did exactly what it was told to. Your original image was mostly grey and black, so the two colors it chose to downsize to were grey and black. It doesn't matter if you set it to "L" or "RGB", since you gave it a predominantly grey and black image. As the other comment mentioned, you can create a very small image where the desired black & white palette is encoded into a minimal number of pixels, and pass this to the quantize method.
Here is a ChatGPT-assisted version you might like as well:
private Rectangle GetCellBounds(int col, int row)
{
    int x = tlp_tra_actual.GetColumnWidths().Take(col).Sum();
    int y = tlp_tra_actual.GetRowHeights().Take(row).Sum();
    int w = tlp_tra_actual.GetColumnWidths()[col];
    int h = tlp_tra_actual.GetRowHeights()[row];
    return new Rectangle(x, y, w, h);
}

void tableLayoutPanel1_CellPaint(object sender, TableLayoutCellPaintEventArgs e)
{
    e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    try
    {
        int row = e.Row;
        var g = e.Graphics;
        float radiusFactor = 1.4f; // 1.0 = original, >1 = bigger arc
        Rectangle cellRect = GetCellBounds(0, row);

        // Make radius bigger than cell min size * factor
        int baseRadius = Math.Min(cellRect.Width, cellRect.Height);
        int radius = (int)(baseRadius * radiusFactor);

        using (GraphicsPath path = new GraphicsPath())
        {
            // Move starting point higher up (because arc is larger)
            path.StartFigure();
            path.AddLine(cellRect.Left, cellRect.Bottom - radius, cellRect.Left, cellRect.Bottom);
            path.AddLine(cellRect.Left, cellRect.Bottom, cellRect.Left + radius, cellRect.Bottom);

            // Bigger arc, starts at bottom and sweeps up to left
            path.AddArc(
                cellRect.Left,            // arc X
                cellRect.Bottom - radius, // arc Y
                radius,                   // arc width
                radius,                   // arc height
                90, 90);
            path.CloseFigure();

            using (Brush brush = new SolidBrush(Color.FromArgb(150, Color.DarkBlue)))
            {
                g.FillPath(brush, path);
            }
        }
    }
    catch
    {
    }
}
If an optional Core Data property has a default value set in the model editor, then:
Core Data never stores nil for that property — it immediately populates new objects with the default value.
That means even if you never explicitly set it, reading it will return the default (e.g., 0), not nil.
valueForKey: will also return an NSNumber with that default value, not nil.
How to allow nil detection:
Leave the attribute optional and clear its default value in the model editor.
After that, Core Data will store nil if you don't set a value.
Now you can detect nil using valueForKey: or by declaring the property as an NSNumber *.
Best practice is to use single quotes all the time:
ORG1_PASSWORD='$orgOne12345'
ORG2_PASSWORD='$orgTwo180000'
ORG3_PASSWORD='ORG_Admin123'
With no quotes or with double quotes, the variables will be interpreted in most cases (when the file is read by bash).
Escaping each character is too verbose, and you would have to remember to do it properly every time you change the password.
function test<T extends string>(arr: T[], callback: (get: (key: T) => string) => void): Promise<void> {
return Promise.resolve();
}
test(['a', 'b', 'c'], (get) => {
get('a'); //works
get('d'); // compiler failure
});
It's not a perfect solution, since crontab works in months, not weeks, but the pattern I'd suggest is:
0 3 */14 * *, which executes the job at 3 AM on every 14th day of the month, i.e. the 1st, 15th, and 29th. That is close to bi-weekly: after the run on the 1st, 2 weeks pass, then another run, then 2 more weeks to the 29th, but only 2-3 days pass before the next month's run on the 1st.
If it has to be exactly 14 days apart, it gets a bit more tricky.
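One common trick for an exact 14-day cadence, assuming your cron and shell support date +%s: run the job weekly and gate it on the parity of the week number since the epoch (job path below is a placeholder).

```shell
# Crontab line: run every Monday at 3 AM, but only on even-numbered weeks.
# (inside a crontab, % must be escaped as \%)
# 0 3 * * 1  [ $(( $(date +\%s) / 86400 / 7 \% 2 )) -eq 0 ] && /path/to/job.sh

# The gate itself, as a function for clarity:
biweekly_gate() {
  # $1 = seconds since the epoch; succeeds on even-numbered weeks
  [ $(( $1 / 86400 / 7 % 2 )) -eq 0 ]
}
```

Two timestamps exactly 14 days apart always land on the same parity, while timestamps 7 days apart land on opposite parities, which gives a strict every-other-week schedule.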
from PIL import Image, ImageEnhance
import requests
from io import BytesIO
# Load your image (update the path if needed)
base_image = Image.open("Screenshot_20250814_101245.jpg").convert("RGBA")
# Load CapCut logo (transparent PNG from web)
logo_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/6b/CapCut_Logo.svg/512px-CapCut_Logo.svg.png"
response = requests.get(logo_url)
logo_image = Image.open(BytesIO(response.content)).convert("RGBA")
# Resize logo to medium size (15% of image width)
base_width, base_height = base_image.size
logo_scale = 0.15
new_logo_width = int(base_width * logo_scale)
aspect_ratio = logo_image.height / logo_image.width
new_logo_height = int(new_logo_width * aspect_ratio)
logo_resized = logo_image.resize((new_logo_width, new_logo_height), Image.LANCZOS)
# Set opacity to 60%
alpha = logo_resized.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(0.6)
logo_resized.putalpha(alpha)
# Position logo in bottom-right corner
position = (base_width - new_logo_width - 10, base_height - new_logo_height - 10)
# Paste logo onto original image
combined = base_image.copy()
combined.paste(logo_resized, position, logo_resized)
# Save the result
combined.save("edited_with_capcut_logo.png")
print("✅ Saved as 'edited_with_capcut_logo.png'")
Looks like the issue’s not with react-export-excel itself but with how npm is trying to grab one of its dependencies over SSH from GitHub. Your network or firewall is probably blocking port 22, which is why it’s timing out.
I’d switch Git from SSH to HTTPS so it can bypass that restriction:
git config --global url."https://github.com/".insteadOf git@github.com:
Then try installing again.
If it still gives you trouble, you might just want to replace react-export-excel; it's pretty outdated. I've had better luck using the xlsx + file-saver combo, and it's actively maintained.
Has anyone been able to solve this problem?
Based on @Pete Becker's answer, I decided to use the following lock-less method: Prepare the output in a std::stringstream and send it to std::cerr in one (expected to be atomic) call.
#include <iostream>
#include <sstream>
[...]
std::stringstream lineToPrint;
lineToPrint << " Hello " << " World " << std::endl;
std::cerr << lineToPrint.str();
There are (at least) two ways you could go about it, seeing that the column structure is identical in the two files.
You could use a Read Range activity on the source Excel file to copy, and an Append Range activity on the destination file. Both of these activities need to be in an Excel Process Scope container.
Another way to go about it could be to read both Excel files (Read Range) and use a Merge Data Table activity to merge the two, before using a Write Range activity to write the entirety back to the destination file.
Best Regards
Soren
That is the expected behaviour. The line apex.item("P1_ERROR_FLAG").setValue("ERROR"); sets the value of the page item on the client side only. Observe the network tab in the browser console - there will be no communication with the server when this happens. The value only gets sent to the server when the item is submitted, e.g. on page submit or by a dynamic action that lists it in "items to submit".
The post does not say when this code executes but I would create a dynamic action on change of P1_ERROR_FLAG that has an action of execute serverside code, items to submit set to P1_ERROR_FLAG and code NULL;. This will submit that page item to the server.
There might be better solutions for your use case but then please provide more info (as much as possible ) about how the page is set up: at what point do you need the P1_ERROR_FLAG value and how is it used ?
After switching from 11g to 12c, I use Altova XMLSpy.
Here is a video showing how to do it:
https://www.youtube.com/watch?v=piVbWtChd6I
And one more nice feature - XSLT / XQuery Back-mapping in Altova XMLSpy:
https://www.youtube.com/watch?v=lK1EDLbxxyo
While writing this question I fiddled around some more and found a solution; since I haven't found a similar question with a working answer so far, I decided to post this question anyway, including the answer - I hope that's ok.
For some reason, setting the environment variables using solr.in.sh doesn't work. However, setting them via Compose's environment: block works just fine, so adjusting this block to
environment:
ZK_HOST: [SELF-IP]:2181
SOLR_OPTS: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -Djetty.host=[SELF-IP]
SOLR_TIMEZONE: Europe/Berlin
SOLR_HOST: [SELF-IP]
worked out sufficiently, no host-mode required.
# Construct the path to the PyQt6 plugins directory
# pyqt6_plugins_path = '/opt/python-venv/venv-3.11/lib/python3.11/site-packages/PyQt6/Qt6/plugins'
pyqt6_plugins_path = os.path.join(sys.prefix, 'lib', f'python{sys.version_info.major}.{sys.version_info.minor}', 'site-packages', 'PyQt6', 'Qt6', 'plugins')
# Set QT_PLUGIN_PATH to include both the PyQt6 plugins and the system Qt plugins
os.environ['QT_PLUGIN_PATH'] = f'{pyqt6_plugins_path}:/usr/lib/qt6/plugins'
# Set the Qt Quick Controls style for Kirigami to prevent the "Fusion" warning
os.environ["QT_QUICK_CONTROLS_STYLE"] = "org.kde.desktop"
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
# Add the system QML import path
engine.addImportPath("/usr/lib/qt6/qml")
.btn.disabled,
.btn[disabled],
fieldset[disabled] .btn {
cursor: not-allowed;
...
}
Posting an answer in case anyone has this exact problem - kudos to @Grismar in the comments.
Setting ssl_verify_client optional_no_ca; will allow the handshake to complete and $ssl_client_verify will be set to FAILED:unable to verify the first certificate which is what I wanted to achieve. It will still work as before when the client has no cert at all (ssl_client_verify is set to NONE)
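For context, a minimal server block using this directive might look like the following (certificate paths and the upstream address are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/server.crt;
    ssl_certificate_key /etc/nginx/server.key;

    # Complete the handshake even when the client cert cannot be verified
    ssl_verify_client optional_no_ca;

    location / {
        # Pass the verification result (NONE / FAILED:... / SUCCESS) upstream
        proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
        proxy_pass http://127.0.0.1:8080;
    }
}
```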
You probably want .CreatedSince, whose description in the docker docs is "Elapsed time since the image was created". For example: docker images --format '{{.Repository}}: {{.CreatedSince}}'
When you talk about production and testing, I would assume you would maintain two separate instances of your service side by side: one for testing and one for production. That's because you typically don't want to shut down your production application just to test a new version.
So I would start two instances, one with TEST_MODE and one with PRODUCTION set. You could do that by running your python script twice, you'll probably want to create two batch files that first set the correct ENV variables and then run the frontend and backend scripts. Depending on those two ENV variables, you set a different database URL as well as a different frontend URL.
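For illustration, the two batch files might look roughly like this (variable names, ports, database URLs, and script names are assumptions based on the description above):

```bat
:: start_test.bat - test instance with its own database and port
set TEST_MODE=1
set DATABASE_URL=sqlite:///test.db
set FRONTEND_URL=http://localhost:8001
python backend.py
```

```bat
:: start_prod.bat - production instance
set PRODUCTION=1
set DATABASE_URL=sqlite:///prod.db
set FRONTEND_URL=http://localhost:8000
python backend.py
```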
I faced the same issue. I used the SQLCMD or BCP method to export the file as UTF-8. Please see my stored procedure below for details.
ALTER PROCEDURE [wmwhse1].[SP_CUSTOMER_GetLoadDataLastHourEmail]
@StorerKeys NVARCHAR(500) = 'XYZ',
@EmailTo NVARCHAR(255) = '[email protected]',
@EmailSubject NVARCHAR(255) = '[PROD] CUSTOMER - Load Data Report Hourly'
AS
BEGIN
SET NOCOUNT ON;
DECLARE @FileName NVARCHAR(255);
DECLARE @FilePath NVARCHAR(500);
DECLARE @EmailBody NVARCHAR(MAX);
DECLARE @CurrentDateTime NVARCHAR(50);
DECLARE @HtmlTable NVARCHAR(MAX);
DECLARE @RecordCount INT;
DECLARE @BcpCommand NVARCHAR(4000);
BEGIN TRY
-- Generate timestamp for filename
SET @CurrentDateTime = REPLACE(REPLACE(CONVERT(NVARCHAR(50), GETDATE(), 120), '-', ''), ':', '');
SET @CurrentDateTime = REPLACE(@CurrentDateTime, ' ', '_');
SET @FileName = 'LoadDataReport_' + @CurrentDateTime + '.csv';
-- Set file path - ensure this directory exists and has write permissions
SET @FilePath = 'C:\temp\' + @FileName;
-- Get data for HTML table and record count
DECLARE @TempTable TABLE (
storerkey NVARCHAR(50),
MANIFEST NVARCHAR(50),
EXTERNALORDERKEY2 NVARCHAR(100),
LOADSTOP_EDITDATE DATETIME
);
INSERT INTO @TempTable
EXEC [wmwhse1].[SP_CUSTOMER_GetLoadDataLastHourData] @StorerKeys = @StorerKeys;
SELECT @RecordCount = COUNT(*) FROM @TempTable;
PRINT 'Records found in temp table: ' + CAST(@RecordCount AS NVARCHAR(10));
-- Only proceed if we have data
IF @RecordCount > 0
BEGIN
-- Create a global temp table for BCP export
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
CREATE TABLE ##TempLoadData (
storerkey NVARCHAR(50),
MANIFEST NVARCHAR(50),
EXTERNALORDERKEY2 NVARCHAR(100),
LOADSTOP_EDITDATE VARCHAR(50) -- Changed to VARCHAR for consistent formatting
);
INSERT INTO ##TempLoadData
SELECT
storerkey,
MANIFEST,
EXTERNALORDERKEY2,
CONVERT(VARCHAR(50), LOADSTOP_EDITDATE, 120)
FROM @TempTable;
PRINT 'Global temp table created with ' + CAST(@@ROWCOUNT AS NVARCHAR(10)) + ' records';
-- Method 1: Try SQLCMD approach first (more reliable than BCP for this use case)
SET @BcpCommand = 'sqlcmd -S' + @@SERVERNAME + ' -d SCPRD -E -Q "SET NOCOUNT ON; SELECT ''storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE''; SELECT storerkey + '','' + ISNULL(MANIFEST,'''') + '','' + ISNULL(EXTERNALORDERKEY2,'''') + '','' + LOADSTOP_EDITDATE FROM ##TempLoadData ORDER BY LOADSTOP_EDITDATE DESC" -o "' + @FilePath + '" -h -1 -w 8000';
PRINT 'Executing SQLCMD: ' + @BcpCommand;
EXEC xp_cmdshell @BcpCommand;
-- Check if file was created and has content
DECLARE @CheckFileCommand NVARCHAR(500);
SET @CheckFileCommand = 'dir "' + @FilePath + '"';
PRINT 'Checking if file exists:';
EXEC xp_cmdshell @CheckFileCommand;
-- Alternative Method 2: If SQLCMD doesn't work, try BCP with fixed syntax
DECLARE @FileSize TABLE (output NVARCHAR(255));
INSERT INTO @FileSize
EXEC xp_cmdshell @CheckFileCommand;
-- If file is empty or doesn't exist, try BCP method
IF NOT EXISTS (SELECT 1 FROM @FileSize WHERE output LIKE '%' + @FileName + '%' AND output NOT LIKE '%File Not Found%')
BEGIN
PRINT 'SQLCMD failed, trying BCP method...';
-- Create CSV header
DECLARE @HeaderCommand NVARCHAR(500);
SET @HeaderCommand = 'echo storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE > "' + @FilePath + '"';
EXEC xp_cmdshell @HeaderCommand;
-- BCP data export to temp file
SET @BcpCommand = 'bcp "SELECT ISNULL(storerkey,'''') + '','' + ISNULL(MANIFEST,'''') + '','' + ISNULL(EXTERNALORDERKEY2,'''') + '','' + ISNULL(LOADSTOP_EDITDATE,'''') FROM ##TempLoadData ORDER BY LOADSTOP_EDITDATE DESC" queryout "' + @FilePath + '_data" -c -T -S' + @@SERVERNAME + ' -d SCPRD';
PRINT 'Executing BCP: ' + @BcpCommand;
EXEC xp_cmdshell @BcpCommand;
-- Append data to header file
DECLARE @AppendCommand NVARCHAR(500);
SET @AppendCommand = 'type "' + @FilePath + '_data" >> "' + @FilePath + '"';
EXEC xp_cmdshell @AppendCommand;
-- Clean up temp file
SET @AppendCommand = 'del "' + @FilePath + '_data"';
EXEC xp_cmdshell @AppendCommand;
END
-- Final file check
PRINT 'Final file check:';
EXEC xp_cmdshell @CheckFileCommand;
END
ELSE
BEGIN
-- Create empty CSV with headers only
DECLARE @EmptyFileCommand NVARCHAR(500);
SET @EmptyFileCommand = 'echo storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE > "' + @FilePath + '"';
EXEC xp_cmdshell @EmptyFileCommand;
PRINT 'Created empty CSV file with headers only';
END
-- Build HTML table (same as before)
SET @HtmlTable = '
<style>
table { border-collapse: collapse; width: 100%; font-family: Arial, sans-serif; }
th { background-color: #4CAF50; color: white; padding: 12px; text-align: left; border: 1px solid #ddd; }
td { padding: 8px; border: 1px solid #ddd; }
tr:nth-child(even) { background-color: #f2f2f2; }
tr:hover { background-color: #f5f5f5; }
.summary { background-color: #e7f3ff; padding: 10px; margin: 10px 0; border-left: 4px solid #2196F3; }
</style>
<div class="summary">
<strong>Report Summary:</strong><br/>
Generated: ' + CONVERT(NVARCHAR(50), GETDATE(), 120) + '<br/>
Storer Keys: ' + @StorerKeys + '<br/>
Time Range: Last 1 hour<br/>
Total Records: ' + CAST(@RecordCount AS NVARCHAR(10)) + '<br/>
<span style="color: green;"><strong>File Encoding: UTF-8</strong></span>
</div>
<table>
<thead>
<tr>
<th>Storer Key</th>
<th>Manifest</th>
<th>External Order Key</th>
<th>Load Stop Edit Date</th>
</tr>
</thead>
<tbody>';
-- Add table rows
IF @RecordCount > 0
BEGIN
SELECT @HtmlTable = @HtmlTable +
'<tr>' +
'<td>' + ISNULL(storerkey, '') + '</td>' +
'<td>' + ISNULL(MANIFEST, '') + '</td>' +
'<td>' + ISNULL(EXTERNALORDERKEY2, '') + '</td>' +
'<td>' + CONVERT(NVARCHAR(50), LOADSTOP_EDITDATE, 120) + '</td>' +
'</tr>'
FROM @TempTable
ORDER BY LOADSTOP_EDITDATE DESC;
END
SET @HtmlTable = @HtmlTable + '</tbody></table>';
-- Handle case when no data found
IF @RecordCount = 0
BEGIN
SET @HtmlTable = '
<div class="summary">
<strong>Report Summary:</strong><br/>
Generated: ' + CONVERT(NVARCHAR(50), GETDATE(), 120) + '<br/>
Storer Keys: ' + @StorerKeys + '<br/>
Time Range: Last 1 hour<br/>
<span style="color: orange;"><strong>No records found for the specified criteria.</strong></span>
</div>';
END
-- Create email body
SET @EmailBody = 'Please find the Load Data Report for the last hour below and attached as UTF-8 encoded CSV.
' + @HtmlTable + '
<br/><br/>
<p style="font-size: 12px; color: #666;">
This is a system generated email, please do not reply.<br/>
CSV file is encoded in UTF-8 format.
</p>';
-- Send email with HTML body and UTF-8 CSV attachment
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'HELLO',
@recipients = @EmailTo,
@subject = @EmailSubject,
@body = @EmailBody,
@body_format = 'HTML',
@file_attachments = @FilePath;
-- Clean up
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
-- Optionally delete the file after sending
DECLARE @DeleteCommand NVARCHAR(500);
SET @DeleteCommand = 'del "' + @FilePath + '"';
EXEC xp_cmdshell @DeleteCommand;
PRINT 'Email sent successfully with UTF-8 CSV attachment: ' + @FileName;
PRINT 'Records processed: ' + CAST(@RecordCount AS NVARCHAR(10));
END TRY
BEGIN CATCH
-- Clean up in case of error
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
DECLARE @ErrorState INT = ERROR_STATE();
PRINT 'Error occurred while sending email: ' + @ErrorMessage;
RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH
END
I think I figured out the solution myself, so I want to post the solution for a Windows local machine here; thanks to @Wayne's suggestion that "It's just that making it effectively work can be super tricky depending on your system".
Open Windows PowerShell and type the following command:
[System.IO.File]::WriteAllBytes("$env:TEMP\ctrl-d.txt", @(4))
Then open the file (open a folder and type the following in the address field):
%TEMP%\ctrl-d.txt
Press Ctrl-A then Ctrl-C to copy the character to the clipboard, then paste that character into the interactive-mode prompt.
This gets me back to the normal ipdb prompt instead of interactive mode.
Did you use the correct mediaID/mediaType for reels and videos?
I'll share my own basic CLI for posting images and videos to Instagram; the code snippet below shows how to use the media type correctly.
func createMediaContainer() (string, error) {
endpoint := fmt.Sprintf("https://graph.instagram.com/%s/%s/media", config.Version, config.IGID)
data := url.Values{}
if mediaType == "video" {
data.Set("media_type", "REELS")
data.Set("video_url", mediaURL)
} else {
data.Set("image_url", mediaURL)
}
data.Set("caption", caption)
data.Set("access_token", config.Token)
resp, err := http.PostForm(endpoint, data)
if err != nil {
return "", err
}
defer resp.Body.Close()
body, _ := ioutil.ReadAll(resp.Body)
if resp.StatusCode != 200 {
return "", fmt.Errorf("API error: %s", string(body))
}
return parseID(body), nil
}
You can try adding the --noweb argument.
As an astronomy buff, I can offer the size of a star vs. the lifetime of a star as an example of something which as input increases, output decreases:
Our sun should burn for about 10 billion years (and we're about halfway there), but a star 10 times more massive will burn about 3,000 times brighter and live only about 20-25 million years. The relationship isn't exponential but a power law: by the empirical mass-luminosity relation, luminosity scales roughly as L ∝ M^3.5, so lifetime (fuel divided by burn rate) scales roughly as M/L ∝ M^-2.5. Astronomers have known for a long time that more massive stars burn dramatically brighter and therefore live much shorter lives.
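As a back-of-the-envelope sketch (assuming the standard mass-luminosity scaling L ∝ M^3.5, so lifetime ∝ fuel/luminosity ∝ M^-2.5, normalized to a 10-billion-year solar lifetime):

```python
def lifetime_gyr(mass_solar: float) -> float:
    # t ∝ M / L with L ∝ M**3.5, so t ∝ M**-2.5;
    # normalized so a 1-solar-mass star lives ~10 Gyr
    return 10.0 * mass_solar ** -2.5

print(lifetime_gyr(1.0))   # 10 Gyr for a sun-like star
print(lifetime_gyr(10.0))  # ~0.03 Gyr, i.e. ~30 million years
```

The 10x star comes out around 30 million years, in the same ballpark as the 20-25 million years quoted above.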
Think of a hotel front desk. You walk up and say,
“Please send someone to clean my room.”
You don’t specify who that is because it depends on which housekeeper is working.
The front desk checks the schedule, like a vtable.
The person assigned at that moment goes to clean your room.
In dynamic dispatch, your code makes a request to call a function. At runtime, the program checks which specific implementation to run before sending it to do the job.
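The analogy above can be sketched in code (hypothetical Python classes; the point is that the caller never names which implementation runs, the runtime type decides):

```python
class Housekeeper:
    def clean(self) -> str:
        raise NotImplementedError

class DayShift(Housekeeper):
    def clean(self) -> str:
        return "day-shift housekeeper cleans the room"

class NightShift(Housekeeper):
    def clean(self) -> str:
        return "night-shift housekeeper cleans the room"

def front_desk(on_duty: Housekeeper) -> str:
    # the "front desk" only asks for clean(); which body runs is
    # looked up at runtime from the object's type (dynamic dispatch)
    return on_duty.clean()

print(front_desk(DayShift()))
print(front_desk(NightShift()))
```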
I had this problem in several versions, and now in version 2024.3.3. I just cleared the cache and the problem was solved:
File > Invalidate Caches… > check "Clear file system cache and Local History" > Invalidate and Restart
1. Install the Capacitor AdMob Plugin:
```bash
npm install @capacitor-community/admob
npx cap sync
```
2. Configure AdMob Plugin: Add the following to your capacitor.config.ts:
```typescript
import { CapacitorConfig } from '@capacitor/core';

const config: CapacitorConfig = {
  plugins: {
    AdMob: {
      appId: 'ca-app-pub-xxxxxxxx~xxxxxxxx', // Your AdMob App ID
      testingDevices: ['YOUR_DEVICE_ID'], // For testing
    },
  },
};
```
Step 1: Initialize AdMob in your React app
```typescript
import { AdMob, AdMobNative, NativeAdOptions } from '@capacitor-community/admob';

// Initialize AdMob
await AdMob.initialize({
  initializeForTesting: true, // Remove in production
});
```
Step 2: Create Native Ad Component
```typescript
import React, { useEffect, useRef } from 'react';

const NativeAdComponent: React.FC = () => {
  const adRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const loadNativeAd = async () => {
      const options: NativeAdOptions = {
        adId: 'ca-app-pub-xxxxxxxx/xxxxxxxx', // Your Native Ad Unit ID
        adSize: 'MEDIUM_RECTANGLE',
        position: 'CUSTOM',
        margin: 0,
        x: 0,
        y: 0,
      };
      try {
        await AdMobNative.createNativeAd(options);
        await AdMobNative.showNativeAd();
      } catch (error) {
        console.error('Error loading native ad:', error);
      }
    };
    loadNativeAd();
    return () => {
      AdMobNative.hideNativeAd();
    };
  }, []);

  return <div ref={adRef} id="native-ad-container" />;
};
```
Step 3: Platform-specific Configuration
For iOS (ios/App/App/Info.plist):
```xml
<key>GADApplicationIdentifier</key>
<string>ca-app-pub-xxxxxxxx~xxxxxxxx</string>
<key>SKAdNetworkItems</key>
<array>
  <!-- Add SKAdNetwork IDs -->
</array>
```
For Android (android/app/src/main/AndroidManifest.xml):
```xml
<meta-data
    android:name="com.google.android.gms.ads.APPLICATION_ID"
    android:value="ca-app-pub-xxxxxxxx~xxxxxxxx"/>
```
```typescript
import React, { useState, useEffect } from 'react';
import { AdMobNative } from '@capacitor-community/admob';

const CustomNativeAd: React.FC = () => {
  const [nativeAdData, setNativeAdData] = useState(null);

  useEffect(() => {
    const loadCustomNativeAd = async () => {
      try {
        const result = await AdMobNative.loadNativeAd({
          adUnitId: 'ca-app-pub-xxxxxxxx/xxxxxxxx',
          adFormat: 'NATIVE_ADVANCED',
        });
        setNativeAdData(result.nativeAd);
      } catch (error) {
        console.error('Failed to load native ad:', error);
      }
    };
    loadCustomNativeAd();
  }, []);

  return (
    <div className="native-ad-container">
      {nativeAdData && (
        <>
          <img src={nativeAdData.icon} alt="Ad Icon" />
          <h3>{nativeAdData.headline}</h3>
          <p>{nativeAdData.body}</p>
          <button onClick={() => AdMobNative.recordClick()}>
            {nativeAdData.callToAction}
          </button>
        </>
      )}
    </div>
  );
};
```
- Test thoroughly: use test ad unit IDs during development
- Error handling: always implement proper error handling for ad loading failures
- User experience: ensure native ads blend seamlessly with your app's design
- Performance: load ads asynchronously to avoid blocking the UI
- Compliance: follow Google AdMob policies for native ad implementation
You do need to sort the attributes in a DER-encoded SET. This is critical for CAdES which computes the hash of SignedAttributes by re-assembling them as an explicit SET before computing the digest. If you didn’t sort them the same way, the hashes won’t match.
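As a sketch of the rule itself: X.690 DER requires the components of a SET OF to appear in ascending lexicographic order of their encoded octets, which in practice is a plain byte-string sort (distinct DER encodings are never prefixes of one another, so the padding clause in X.690 §11.6 doesn't change the result). The sample attribute encodings below are made up:

```python
def der_sort(encoded_attrs: list) -> list:
    # DER: SET OF components must be sorted by their full
    # encoded octet strings, compared lexicographically
    return sorted(encoded_attrs)

# hypothetical pre-encoded signed attributes
a = bytes.fromhex("310b3009060355040613025553")
b = bytes.fromhex("3003020101")
print(der_sort([a, b]))  # b first: its leading octet 0x30 sorts before 0x31
```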
Fork flutter_udid on GitHub.
In your fork, change jcenter() → mavenCentral().
Reference your fork in pubspec.yaml:
dependencies:
  flutter_udid:
    git:
      url: paste your forked repo url
      ref: main
Your project will now use your modified fork rather than the original package from pub.dev.
Please go through this AWS blog post: https://aws.amazon.com/blogs/storage/connect-snowflake-to-s3-tables-using-the-sagemaker-lakehouse-iceberg-rest-endpoint/.
If you need a process in an active user session but want to start it remotely, you will have to combine the Task Scheduler with event triggers. Create a task in the Task Scheduler configured so that it always runs in the active user session, set an event as its trigger, and add a filter for a keyword. Then you only need to remotely write that event/log entry to trigger the task.
Make sure your Alpine.js is not loaded twice. If you are using Livewire version 3, you don't need to load Alpine anywhere else.
You can check your Livewire version in composer.json. In my case it looks like this:
{
    "require": {
        "livewire/livewire": "^3.6.4"
    }
}
This may help others:
I had very similar outputs (almost the same ones) after running all three commands below:
service docker start
systemctl status docker.service
journalctl -xe
Nothing on Stack Overflow worked. I reviewed the step-by-step installation on my WSL2 Ubuntu (standard) (env: Windows 11 Pro) and realized I had run:
sudo nano /etc/wsl.conf
and inserted this in the wsl.conf file:
[boot]
command = service docker start
After deleting that from wsl.conf, everything worked well.
from reportlab.lib.pagesizes import letter
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Image as RLImage

# Further reduce image size to ensure it fits on the PDF page
max_width = 6 * inch
max_height = 8 * inch
doc = SimpleDocTemplate(output_pdf_path, pagesize=letter)
story = [RLImage(input_image_path, width=max_width, height=max_height)]
doc.build(story)
I have a doubt about this diagram: how can all the angles between the H atoms be 109°? A complete angle is 360°, but here the three angles between the H atoms are 109° each, and adding three 109° angles gives only 327°, not the 360° it should be. So my doubt is how this comes about.
Fixed the issue by just deleting node_modules and package-lock.json and running npm i inside the project folder.
I suspect that the way you are setting the icon in displayNotification() could be the culprit:
private fun displayNotification(){
...
// set the small icon using the Icon created from bitmap (API 23+)
builder.setSmallIcon(Icon.createWithBitmap(bitmap))
.setContentTitle("Simple Notification")
...
}
It looks like your createWithBitmap() function could be repeatedly called by the builder; maybe assign Icon.createWithBitmap() to a variable outside of the builder, e.g.:
val bitmapIcon = Icon.createWithBitmap(bitmap)
builder.setSmallIcon(bitmapIcon)
(I'm guessing a little here; I'm still learning Kotlin.)
There is also the fact that MainApplication.kt is the default entry point of the program unless you make changes in the Manifest file to point the launcher at other code.
let dataPath: String = "MyDB"
//var db_uninitialized: OpaquePointer? // 👈 Reference #0 -> Never used. Will fail if called.
func openDatabase() -> OpaquePointer? {
let filePath = try! FileManager.default.url ( for: .documentDirectory , in: .userDomainMask , appropriateFor: nil , create: false ).appendingPathComponent ( dataPath )
var db: OpaquePointer? = nil
if sqlite3_open ( filePath.path , &db ) != SQLITE_OK {
debugPrint ( "Cannot open DB." )
return nil
}
else {
print ( "DB successfully created." )
return db
}
}
// 👇 Reference #1 -> PRIMARY KEY column must be unique: no other row in the column may contain an equal value.
func createStockTable() {
let createTableString = """
CREATE TABLE IF NOT EXISTS Stocks (
id INTEGER PRIMARY KEY,
stockName STRING,
status INT,
imgName STRING,
prevClose DOUBLE,
curPrice DOUBLE,
yield DOUBLE,
noShares INT,
capitalization DOUBLE,
lastUpdated STRING
);
"""
var createTableStatement: OpaquePointer? = nil
if sqlite3_prepare_v2 ( initialized_db , createTableString , -1 , &createTableStatement , nil ) == SQLITE_OK {
if sqlite3_step ( createTableStatement ) == SQLITE_DONE {
print ( "Stock table is created successfully" )
} else {
print ( "Stock table creation failed." )
}
sqlite3_finalize ( createTableStatement )
}
sqlite3_close ( initialized_db ) // 👈 Reference #2 -> Connection lost and will need to be recreated for insertion function.
}
// 👇 Reference #3 -> extension on `OpaquePointer?` declared.
extension OpaquePointer? {
func insertStocks ( id: Int, stockName: String, status: Int, imgName: String, prevClose: Double, curPrice: Double, yield: Double, noShares: Int, capitalization: Double, lastUpdated: String) -> Bool {
let insertStatementString = "INSERT INTO Stocks (id, stockName, status, imgName, prevClose, curPrice, yield, noShares, capitalization, lastUpdated) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);"
var insertStatement: OpaquePointer? = nil
if sqlite3_prepare_v2 ( self , insertStatementString , -1, &insertStatement , nil ) == SQLITE_OK {
sqlite3_bind_int ( insertStatement , 1 , Int32 ( id ) )
sqlite3_bind_text ( insertStatement , 2 , ( stockName as NSString ).utf8String , -1 , nil )
sqlite3_bind_int ( insertStatement , 3 , Int32(status))
sqlite3_bind_text ( insertStatement , 4 , ( imgName as NSString ).utf8String, -1 , nil )
sqlite3_bind_double ( insertStatement , 5 , Double ( prevClose ) )
sqlite3_bind_double ( insertStatement , 6 , Double ( curPrice ) )
sqlite3_bind_double ( insertStatement , 7 , Double ( yield ) )
sqlite3_bind_int64 ( insertStatement , 8 , Int64 ( noShares ) )
sqlite3_bind_double ( insertStatement , 9 , Double ( capitalization ) )
sqlite3_bind_text ( insertStatement , 10 , ( lastUpdated as NSString ).utf8String, -1, nil)
if sqlite3_step ( insertStatement) == SQLITE_DONE {
print("Stock Entry was created successfully")
sqlite3_finalize(insertStatement)
return true
} else {
print("Stock Entry Insert failed")
return false
}
} else {
print("INSERT Statement has failed")
return false
}
}
}
/// 👇 Reference #5 -> Change the `id` input from `1` to `Int.random(in: 0...10000)` to satisfy the `unique` constraint. Note this could still fail if the generated integer already exists in the `id` column.
func addStocks() {
let result = initialized_db.insertStocks ( id: Int.random(in: 0...10000), stockName: "Tulsa Motors", status: 1, imgName: "Tulsa_logo", prevClose: 125.18, curPrice: 125.18, yield: 0.025, noShares: 14357698, capitalization: .pi , lastUpdated: "2025-05-01 17:00:00")
print ( "Database insertion result: \( result )" )
}
var initialized_db = openDatabase() // 👈 Reference #6 -> Captured instance of Database connection.
createStockTable() // 👈 Reference #7 -> Connection closed at the end of function.
initialized_db = openDatabase() // 👈 Reference #8 -> Connection reestablished.
addStocks() // 👈 Reference #9 -> Dont forget to close your connection, finalize, and clean up.
If you wanted to make the id column autoincrement, as Douglas W. Palme said, you can omit it from your bind function and adjust your column indices. I would also recommend declaring it in your `createTableString` for completeness' sake.
let createTableString = """
CREATE TABLE IF NOT EXISTS Stocks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
stockName STRING,
status INT,
imgName STRING,
prevClose DOUBLE,
curPrice DOUBLE,
yield DOUBLE,
noShares INT,
capitalization DOUBLE,
lastUpdated STRING
);
"""
Best regards.
I finally solved it: select the Node2D, not the Sprite2D.
GridDB does not natively support querying nested JSON values directly within a STRING column. The current capabilities of GridDB for handling JSON payloads stored as strings do not include querying nested elements within the JSON. The approach you are currently using—selecting all rows, parsing the JSON in Java, and then filtering manually—is the typical method for dealing with JSON data stored as strings in GridDB.
If you require the ability to query nested JSON values efficiently, you may need to consider a different database system that has built-in support for JSON data types and allows querying of nested JSON elements directly, such as MongoDB. MongoDB, for example, provides powerful querying capabilities for JSON documents, including the ability to query nested fields.
In summary, with GridDB, you will need to handle JSON parsing and filtering within your application code, as native querying of nested JSON is not supported.
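That client-side pattern looks roughly like this (a Python sketch with made-up rows and column names; the same fetch-then-filter logic applies in Java after retrieving the rows from GridDB):

```python
import json

def filter_by_nested(rows, key_path, expected):
    """Keep rows whose JSON payload has key_path equal to expected."""
    matches = []
    for row in rows:
        value = json.loads(row["payload"])   # the STRING column holding JSON
        for key in key_path:                 # walk the nested path
            value = value.get(key) if isinstance(value, dict) else None
        if value == expected:
            matches.append(row)
    return matches

rows = [{"payload": '{"device": {"id": "a1"}}'},
        {"payload": '{"device": {"id": "b2"}}'}]
print(filter_by_nested(rows, ["device", "id"], "a1"))  # only the first row matches
```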
=LET(pvt,PIVOTBY(B2:B27,A2:A27,B2:B27,LAMBDA(x,ROWS(x)),,0,,0),sort,MATCH({"","Jan","Feb","Mar","Apr","May"},TAKE(pvt,1)),CHOOSECOLS(pvt,sort))
Any ideas why my UITabBar looks like it has a background on iOS 26? Built with the Xcode 26 beta.
So I figured out what the issue was: simply removing the error return from the end of the method signature solved my problem, and the method is now accessible from other methods across my package.
The question is about the behavior of Swift’s Array.max when the array is of type Double and contains one or more NaN values alongside valid numeric values.
The official documentation simply states that max() returns “the sequence’s maximum element” and that it returns nil if the array is empty. However, this leaves ambiguity in cases where the concept of “maximum” is mathematically undefined, such as when NaN is involved, since any comparison with NaN is false.
The user points out that if “maximum” means “an element x such that x >= y for every other y in the array,” then an array containing a NaN technically doesn’t have a maximum element at all. That raises the question: should max() return nil, NaN, or something else in this scenario?
Through experimentation, the user observed that Swift’s implementation seems to ignore NaN values when determining the maximum and instead returns the maximum of the remaining non-NaN numbers. This approach is practical, but it’s not explicitly documented, which makes developers unsure whether it’s a guaranteed behavior or just an implementation detail that could change.
The user is seeking official Apple documentation that explicitly confirms this handling of NaN in Array.max, rather than having to infer it from experiments.
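For comparison, the same ambiguity bites in other languages too. Python's built-in max scans with pairwise > comparisons, so its answer is order-dependent when NaN is present, which illustrates exactly why relying on undocumented NaN handling is risky:

```python
import math

nan = float("nan")
print(max([1.0, nan, 2.0]))  # 2.0 -- nan never wins a > comparison
print(max([nan, 1.0, 2.0]))  # nan -- nan is the initial candidate and is never replaced
```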
The initialization vector plays a role like a salt: it's a random value that makes different encryption sessions uncorrelated, which makes it more difficult to crack the encryption/decryption key. By definition, the IV influences the output of the encryption algorithm, and likewise the output of the decryption algorithm. By changing the IV in the middle of an encrypt + decrypt, you are essentially corrupting your decryption process; in block modes such as CBC, a wrong IV garbles only the first plaintext block, which is why you didn't get total garbage as a result.
While the initialization vector does NOT have to be secret, it DOES have to be different for every new piece of plaintext that is encrypted, otherwise the ciphertexts will all be correlated and an attacker will have an easier time cracking your encryption key.
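Here is a toy illustration (pure Python, with XOR-by-key standing in for a real block cipher, so this is only a sketch of the structure, not real cryptography): in CBC-style chaining, decrypting with the wrong IV garbles only the first block, because later blocks are chained off the previous ciphertext rather than the IV.

```python
BLOCK = 8

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        block = xor(xor(plaintext[i:i + BLOCK], prev), key)  # toy "cipher": XOR with key
        out += block
        prev = block                                          # chain on the ciphertext
    return out

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out += xor(xor(block, key), prev)
        prev = block
    return out

key, iv = b"K" * BLOCK, b"A" * BLOCK
msg = b"12345678abcdefgh"                  # two 8-byte blocks
ct = cbc_encrypt(key, iv, msg)
print(cbc_decrypt(key, iv, ct))            # full plaintext recovered
print(cbc_decrypt(key, b"B" * BLOCK, ct))  # first block garbled, second block intact
```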
Here is an update adding a new elide-widget text box in between the other two.
import tkinter as tk
root = tk.Tk()
root.title("Testing widgets for Elide")
# create 'line number' text box
line_text = tk.Text(root, wrap="none", width=5, insertwidth=0) # don't want the cursor to appear here
line_text.pack(fill="y", side="left")
# added > create elide button text box
line_textR = tk.Text(root, wrap="none", width=2, insertwidth=0) # don't want the cursor to appear here
line_textR.pack(fill="y", side="left")
# create 'code' text box
text_box = tk.Text(root, wrap="none")
text_box.pack(fill="both", expand=True, side="left")
# add a tag to line number text box (need text to be at the right side)
line_text.tag_configure("right", justify="right")
# add some text into the text boxes
for i in range(13):
line_text.insert("end", "%s \n" % (i+1)) # add line numbers into line text box (now on the left side)
line_text.tag_add("right", "1.0", "end")
for i in range(13):
text_box.insert("end", "%s \n" % ("some text here at line number #" + str(i+1))) # add some text int the main text box (now on the right side)
for i in range(13):
line_textR.insert("end", " \n") # add blank space on each line for the elide widget text box _ this allows for widget placement by line number (now in the middle)
# add button to use as elide +/- (inside text boxes? _ not sure which widget is correct (button, label, image)?
elide_button = tk.Button(line_textR, text="-")
line_textR.window_create("11.0", window=elide_button) # *** test ***
root.mainloop()
To make the two series of samples independent and uncorrelated, I suggest you randomly shuffle the order of the samples in the second series.
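For example, with Python's standard library (the series values here are placeholders):

```python
import random

series_a = [1.2, 3.4, 5.6, 7.8, 9.0]
series_b = [2.1, 4.3, 6.5, 8.7, 0.9]

shuffled_b = series_b[:]     # copy so the original ordering is preserved
random.shuffle(shuffled_b)   # a random order breaks pairwise correlation with series_a
```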
from pdf2image import convert_from_path

# Convert the PDF to JPG images
pages = convert_from_path(file_path, dpi=200)
jpg_paths = []
for i, page in enumerate(pages):
    jpg_path = f"/mnt/data/linea_tiempo_fosa_mariana_page_{i+1}.jpg"
    page.save(jpg_path, "JPEG")
    jpg_paths.append(jpg_path)
I think you want the in operator, but I haven't tested the following:
for (let idx = 0; idx < collection.length; idx++) {
    if (idx in collection) {  // true only for indices that actually exist (skips holes)
        collection[idx] = collection[idx] * 2;
    }
}
What resource group are you providing in the command to create the deployment?
az deployment group create --resource-group
This is the scope the deployment will be created in. You cannot create resources in two different resource groups in the same file just by using scope.
You should create a separate bicep file for creating the resources in the second RG and use that resource group name when running the command to create the deployment.
Although this is not exactly what you are asking for, Azure DevOps supports adding retryCountOnTaskFailure to a task, which lets you configure retries if the task fails.
Microsoft doc reference - https://learn.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml#number-of-retries-if-task-failed
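For example (a minimal sketch; the script task here is a placeholder):

```yaml
steps:
  - task: Bash@3
    retryCountOnTaskFailure: 3   # re-run the task up to 3 more times if it fails
    inputs:
      targetType: inline
      script: ./run-flaky-integration-tests.sh
```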