Did you manage to solve the issue?
Using control names in a switch statement works, but if a control name is ever changed during the life of your app, the switch statement silently breaks.
This is one solution...
if (sender == rdoA) {
    // handle rdoA selection
} else if (sender == rdoB) {
    // handle rdoB selection
} else if (sender == rdoC) {
    // handle rdoC selection
} else if (sender == rdoD) {
    // handle rdoD selection
}
Unfortunately, Prophet is completely built around Stan. Setting mcmc_samples=0
turns off the full Bayesian inference, but even the simple optimization is run by Stan. I am afraid your options are either talking to your administrator or not using Prophet (or Gloria, respectively). Good luck!
Use ffmpeg!
brew install ffmpeg
ffmpeg -i my-file.aiff my-file.mp3
Try updating the package below (or the other logback packages) to a more recent version:
ch.qos.logback:logback-core
For me, this exact issue appeared when I upgraded Spring Boot from 3.3.5 to 3.5.4.
When I updated my logback-core package to 1.5.18, the problem was gone.
The RecursiveCharacterTextSplitter is not very good at producing nice overlaps, but it will always try to overlap the chunks if possible. The overlap is determined by the size of the splits produced by the separator, so if the first separator already gives a good split (all chunks less than chunk_size), the other separators will not be used to get a finer overlap split.
For example:
You have chunk_size=500 and overlap=50
The first separator splits the document into five chunks with the following lengths:
[100, 300, 100, 100, 100]
The chunks are then merged together until the next split in line would exceed the limit.
So chunks 0, 1, and 2 will be merged together to form a final document chunk (100 + 300 + 100 = 500). Since the last chunk in the merge (chunk 2) has size 100, which is bigger than the allowed overlap of 50, it will not be included in the next merge of chunks 3 and 4, thus giving no overlap.
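To see this behaviour concretely, here is a minimal sketch using the langchain-text-splitters package (the separators shown are the library defaults; the input text is a placeholder):

```python
# a minimal sketch: chunk_size/chunk_overlap as in the example above
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_text = "your document text here " * 200  # placeholder input

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    separators=["\n\n", "\n", " ", ""],  # tried in this order
)
chunks = splitter.split_text(long_text)
print([len(c) for c in chunks])  # inspect the actual chunk sizes and overlaps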
Just uninstall VS Code and delete the Code folder from AppData:
Uninstall VS Code
Open Win+R and type %APPDATA%
Locate the "\Code" folder and delete it
Reinstall VS Code
Note: you will need to reinstall your extensions (if needed), so it is a good idea to note down the installed extensions before uninstalling VS Code.
Posting this in case anybody stumbles upon this; it took me several hours to find the solution.
My public video_url had a double // in it, which was the issue. Also, the Content-Type needs to be video/mp4.
e.g. https://example.com//video.mp4 (notice the double slash)
I reviewed the Pulsar code in question; it logs the exception directly before completing the future with it.
consumerSubscribedFuture.thenRun(() -> readerFuture.complete(reader)).exceptionally(ex -> {
    log.warn("[{}] Failed to get create topic reader", topic, ex);
    readerFuture.completeExceptionally(ex);
    return null;
});
There's probably little you can do here.
Just a comment, and it is the slowest thing on earth, but a LOCAL STATIC FORWARD_ONLY cursor can always be running; just take it in chunks, e.g. SELECT TOP (whatever). You know, set a task to run every x minutes that keeps your busy DB pruned, logged to file if you need it, etc.
There is this YCSB fork that can generate a sine wave.
A sine wave can be produced using these properties:
strategy = sine | constant (defaults to constant)
amplitude = Int
baseTarget = Int
period = Int
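For example, a hypothetical workload snippet using those properties (the values are made up; check the fork's README for the exact semantics):

```
strategy=sine
amplitude=500
baseTarget=1000
period=60
```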
The question is quite old now, but I had a similar issue where I needed to display the map in a specific aspect ratio.
What helped me was adding the following styles to the containing element for the google map:
#map {
width: 100%;
aspect-ratio: 4 / 3;
}
Hope this might help some of you. :)
Query the knowledge graph (read-only) to see if the relevant data for the given input already exists.
If data exists → use it directly for evaluation or downstream tasks.
If data does not exist → use an external LLM to generate the output.
Optionally evaluate the LLM output.
Insert the new output into the knowledge graph so it can be reused in the future.
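A minimal sketch of that loop in Python; kg_lookup, kg_insert, and llm_generate are hypothetical stand-ins for your graph client and LLM call:

```python
def get_or_generate(query: str) -> str:
    # 1. Read-only lookup in the knowledge graph
    cached = kg_lookup(query)        # hypothetical graph read
    if cached is not None:
        # 2. Data exists: use it directly
        return cached
    # 3. Data missing: fall back to the external LLM
    output = llm_generate(query)     # hypothetical LLM call
    # 4. Optionally evaluate `output` here before accepting it
    # 5. Insert the new output so it can be reused in the future
    kg_insert(query, output)         # hypothetical graph write
    return output
```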
This approach is standard for integrating LLMs with knowledge graphs: the graph acts as a persistent store of previously generated or validated knowledge, and the LLM is used only when needed.
The key benefits of this approach:
Efficiency: You avoid regenerating data already stored.
Traceability: All generated outputs can be stored and tracked in the graph.
Scalability: The graph gradually accumulates verified knowledge over time.
This one works for me on Android 16:
lunch aosp_x86_64-ap2a-eng
I tried using this on my iPhone simulator on a Mac and it doesn't work, but using the Expo app on my physical iPhone and scanning the QR code worked. Maybe it is the iOS version you are using on the simulator.
The model predicts the same output due to overfitting to a trivial solution on the small dataset. Low pre-training diversity occurs because the frozen base model provides static features, which leads to a narrow output range from the randomly initialized final layers. Please refer to the gist for details.
Same exact problem, did you fix this?
## Resolution
Replace `data.aws_region.current.name` with `data.aws_region.current.id`:
```hcl
# Updated - no deprecation warning
"aws:SourceArn" = "arn:aws:logs:${data.aws_region.current.id}:${data.aws_caller_identity.current.account_id}:*"
```

Consider updating the deprecation warning message to be more explicit.
I finally managed to fix the problem: instead of deciding in a BlocListener where to navigate according to the state in the auth or app page, I built the widgets in a BlocBuilder, changed a switch on state.runtimeType to an if clause for each possible state, and fired the initial event whenever authOrAppState changes.
I am having the same issue. I have burned through 5 motherboards so far while developing a PCIe card with a Xilinx MPSoC. A Reddit user says the Xilinx PCIe core will not cooperate with certain motherboards and will corrupt the BIOS. It is hard to imagine the BIOS going bad just from running a PCIe device, but apparently they fixed the motherboard by manually flashing the BIOS again. I haven't seen any discussion of this issue on the Xilinx forums.
You can check out this answered StackOverflow question to understand how event propagations in forms work:
Why does the form submit event fire when I have stopped propagation at the submit button level?
But here's a short explanation of why you're having this issue:
So the behavior you’re seeing is intentional. submit is not a “normal” bubbling event like click. In the HTML specification, submit is dispatched by the form element itself when a submission is triggered (by pressing Enter in a text input, clicking a submit button, or calling form.requestSubmit()), not as a result of bubbling from a descendant.
When you call:
input.dispatchEvent(new Event("submit", { bubbles: true }));
on a descendant element inside a <form>, the event may be retargeted or only seen by the form, depending on the browser’s implementation. That’s why you only see the FORM submit log. The event isn’t flowing “naturally” through the DOM from the span the way a click event would.
Cheers! Hope this helps, happy building!
Replace tfds.load('imdb_reviews/subwords8k', ...) with tfds.load('imdb_reviews', ...), then manually create a tokenizer using SubwordTextEncoder.build_from_corpus on the training split, map this tokenizer over the dataset with tf.py_function to encode the text into integer IDs, and finally use padded_batch to handle variable-length sequences before feeding them into your model.
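Roughly, a sketch of what that looks like, following the standard TFDS pattern (treat details like target_vocab_size and the batch size as assumptions):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# load the plain-text dataset instead of the removed subwords8k variant
train_ds = tfds.load('imdb_reviews', split='train', as_supervised=True)

# build a subword tokenizer from the training split
tokenizer = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(
    (text.numpy() for text, _ in train_ds), target_vocab_size=8192)

def encode(text, label):
    return tokenizer.encode(text.numpy()), label

def tf_encode(text, label):
    # wrap the eager tokenizer call so it can run inside the tf.data pipeline
    encoded, label = tf.py_function(encode, [text, label], [tf.int64, tf.int64])
    encoded.set_shape([None])
    label.set_shape([])
    return encoded, label

# pad variable-length sequences per batch
train_batches = train_ds.map(tf_encode).padded_batch(32)
```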
I know it's an old post and kind of unrelated, but as described above, justify-content doesn't have a baseline value. If you're as stupid as me, then you probably mistook it for justify-items, which indeed has (first/last) baseline values, as specified in CSS Box Alignment Module Level 3 ("Value: ... | <baseline-position> | ...").
Replace this -
borderRadius: BorderRadius.circular(isCenter ? imageRadius : 0)
With -
borderRadius: BorderRadius.circular(imageRadius)
because currently you are applying the radius only when the image is the center image.
If any of the cells in the range contain a formula, then SUM will show 0.
This command shows you the list of aliases:
keytool -list -v -cacerts -storepass changeit | sed -n 's/^Alias name: //p'
I believe this is a bug: OpenAPI Generator ignores the default responses. It's discussed here on their GitHub. You can patch the generator as suggested in the thread or use someone's fork. I ended up writing a simple script that goes through the Collibra OpenAPI specs and adds a 200 response alongside the default ones, and generated the client from the patched JSON.
Why shouldn't the Account-Id be a company name?
Otherwise, you can also set the Account-Name:
https://support.pendo.io/hc/en-us/articles/21326198721563-Choose-IDs-and-metadata
https://support.pendo.io/hc/en-us/articles/9430394517403-Configure-for-Feedback
To host Data API Builder on-prem on Windows Server 2022 with IIS and SQL Server:
Update dab-config.json for production (enable REST, set CORS, auth, etc.) and make it listen externally with dab start --host 0.0.0.0 --port 5000.
Publish it with dotnet publish -c Release -r win-x64 --self-contained true and install the generated EXE as a Windows Service using sc create (or NSSM) so it runs automatically.
Optionally configure IIS as a reverse proxy: install URL Rewrite + Application Request Routing, create a rule to forward all requests to http://localhost:5000, and let IIS handle SSL, host bindings, and firewall exposure.
Make sure you lock down CORS and authentication before making it publicly reachable.
Years ago, I needed a simple, reliable logger with zero dependencies for a Test Automation Framework. I ended up building one and just published it on GitHub and NuGet as ZeroFrictionLogger - MIT licensed and open source.
It looks like it could be a good fit for your question.
Can you please check the version of Spring Boot you are currently using?
The call to registerLazyIfNotAlreadyRegistered occurs in Spring Boot 3.3+, while the method itself was introduced in Spring Boot 3.2+.
If you have mixed versions (for example, Spring Boot 3.3.x pulling in Spring Data JPA 3.3.x, but another library in your project bringing in an older Spring Data JPA like 3.1.x or 2.x), try running:
mvn dependency:tree | grep spring-data-jpa
mvn dependency:tree | grep spring-data-commons
If you see two versions of spring-data-jpa, then remove the older version:
mvn dependency:tree -Dverbose | grep "spring-data-jpa"
The optimize-autoloader addition to composer.json works for custom classes like Models, but not for vendor classes like Carbon. This can be achieved by publishing Tinker's config file and adding Carbon as an alias.
Run php artisan vendor:publish --provider="Laravel\Tinker\TinkerServiceProvider" to generate config/tinker.php if the file doesn't already exist.
Edit the alias array in this file to alias Carbon:
'alias' => [
'Carbon\Carbon' => 'Carbon',
]
Then, run composer dump-autoload.
Tinker should now automatically alias Carbon, allowing Carbon::now() to work without using the full namespace.
Please take a look at this reference; it will give you a better understanding:
https://docs.oracle.com/javase/tutorial/jdbc/basics/sqlxml.html
$user = Read-Host "user"
$pass = Read-Host "pass" -AsSecureString
$cred = New-Object System.Management.Automation.PSCredential -ArgumentList $user, $pass
Find-Package -Name "$packageName" -ProviderName NuGet -AllVersions -Credential $cred
Back in 2017, I was looking for a modern-day database ORM 😄
<!-- Image gallery -->
<div class="gallery">
<img src="image1.jpg" onclick="openModal(0)">
<img src="image2.jpg" onclick="openModal(1)">
<img src="image3.jpg" onclick="openModal(2)">
</div>
<!-- Image display modal -->
<div id="modal" style="display:none;">
<button onclick="prevImage()">←</button>
<img id="modal-img" src="">
<button onclick="nextImage()">→</button>
<button onclick="closeModal()">Close</button>
</div>
Change
src: url('../fonts/Rockness.ttf');
to
src: url('../fonts/Rockness.ttf') format('truetype');
The solution was to update ReSharper to the newest version 2025.2 (Build 252.0.20250812.71120 built on 2025-08-12)
Content of e.g. logging.h:
#include <QDebug>
#define LOG(Verbosity) q##Verbosity().noquote().nospace()
then use it like:
#include "logging.h"
QString abc = "bla";
LOG(Info) << abc;
LOG(Debug) << abc;
cheers
Thilo
This may not be the solution for your issue.
However, in my case I found that the issue was to do with an incompatible sql.js
version that had been updated.
I found that versioning sql.js in my package.json to ~1.12.0 resolved this issue for me.
"sql.js": "~1.12.0",
First click on the field, then write the text:
local username = splash:select('input[name=username]')
username:mouse_click() -- click on field
splash:send_text('foobar') -- write text
There is no <Head> component in the App Router, so this would work only with the Pages Router.
I think what you are looking for is JFrog Curation.
What Windows calls OwnerAuthFull is the base64-encoded lockout password (I believe this is terminology inherited from TPM 1.2). You can test it with tpm2_dictionarylockout -c -p file:key.bin, where key.bin contains that password after decoding it with base64 -d.
The TPM2 owner password (owner / storage hierarchy) is unset; you can verify that with this command:
# tpm2_getcap properties-variable | grep AuthSet
ownerAuthSet: 0
endorsementAuthSet: 0
lockoutAuthSet: 1
For me, it works using a dot before the index:
-DHttpServerConfig.sourceFilePath.0=qwerty -DHttpServerConfig.sourceFilePath.1=asdfg
If you set equal indexes, the last value overrides the previous one.
I actually ran into the exact same struggle recently when trying to get Google Picker working in a Streamlit app (though I didn’t try it with ngrok). I’m more of a Python person too, so mixing in the JavaScript OAuth flow was… let’s just say “fun.” 😅
In the end, I decided to build a Streamlit component for it: it wraps the Google Picker API and works with a normal OAuth2 flow in Python.
It supports:
Picking files or folders from Google Drive
Multi-select
Filtering by file type/MIME type
Returns st.file_uploader-style Python UploadedFile objects you can read right away
You can install it with:
pip install streamlit-google-picker
Might save you from fighting with the JavaScript side, and even if I didn't try it with ngrok, there's no reason it shouldn't work.
You can also check the right way to set up the Google Cloud settings: Demo + Google Cloud setup guide (Medium)
I used this condition and it works, though the dialog does not show up in disambiguation:
intents.size() > 1 && intents.contains('intentName1') && intents.contains('intentName2')
As of August 2025, the location of cl.exe is:
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.44.35207\bin\Hostx64\x64
You have to install Visual Studio from here: https://visualstudio.microsoft.com/
Remember to select Desktop Development with C++ when installing otherwise cl.exe will not exist.
It's a bit of a workaround, but I think this answer might help: https://stackoverflow.com/a/58273676/15545685
Applied to your case:
library(tidyverse)
library(ggforce)
dat <- data.frame(
date = seq(as.Date('2024-01-01'),
as.Date('2024-12-31'),
'1 day'),
value = rnorm(366, 10, 3)
)
p0 <- dat |>
ggplot(aes(x = date, y = value)) +
geom_point() +
labs(x = NULL,
y = 'Value') +
theme_bw(base_size = 16) +
scale_x_date(date_labels = '%b %d') +
facet_zoom(xlim = c(as.Date("2024-06-01"), as.Date("2024-06-20")))
p1 <- p0 +
scale_x_date(breaks = seq(as.Date('2024-01-01'),
as.Date('2024-12-31'),
'1 month'),
limits = c(as.Date('2024-01-01'),
as.Date('2024-12-31')),
date_labels = '%b\n%Y')
gp0 <- ggplot_build(p0)
gp1 <- ggplot_build(p1)
k <- gp1$layout$layout$SCALE_X[gp1$layout$layout$name == "x"]
gp1$layout$panel_scales_x[[k]]$limits <- gp1$layout$panel_scales_x[[k]]$range$range
k <- gp1$layout$layout$PANEL[gp1$layout$layout$name == "x"]
gp1$layout$panel_params[[k]] <- gp0$layout$panel_params[[k]]
gt1 <- ggplot_gtable(gp1)
grid::grid.draw(gt1)
Replace with this
select[required] {
padding: 0;
background: transparent;
color: transparent;
border: none;
}
How about not using awk at all?
echo 255.255.192.0 | sh -c 'IFS=.; read m ; n=0; for o in $m ; do n=$((($n<<8) + $o)); done; s=$((1<<31)); p=0; while [ $(($n & $s)) -gt 0 ] ; do s=$((s>>1)) p=$((p+1)); done; echo $p'
Annotation: IFS is the input field separator; it splits the netmask into individual octets, so $m becomes the set of four numbers. The next variable, $n, is the 32-bit number of the netmask, constructed by going over each octet $o (the iterator in the first loop) and shifting 8 bits left each time. The second loop uses $s (the 'shifter') as a 32-bit number with only a single 1 bit, starting at position 32; while it shifts down it is compared (bitwise &) to the mask, and the return value $p increases every time there is a 1, until there is no more match (so it stops at the first 0 bit).
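For comparison, here is the same bit-counting logic as a small Python sketch:

```python
def prefix_length(netmask: str) -> int:
    # pack the four octets into one 32-bit integer
    n = 0
    for octet in netmask.split("."):
        n = (n << 8) + int(octet)
    # count leading 1 bits, stopping at the first 0 bit
    p = 0
    s = 1 << 31
    while n & s:
        s >>= 1
        p += 1
    return p

print(prefix_length("255.255.192.0"))  # -> 18
```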
discord.py (which nextcord is a wrapper of) has resolved this issue in https://github.com/Rapptz/discord.py/issues/10207; I believe an update of the nextcord package or the discord.py package should resolve this issue.
You can make modelValue a discriminated union keyed by range so TS can infer the correct type automatically. For example:
type Props =
| { range: true, modelValue: [number, number] }
| { range?: false, modelValue: number };
Then use that type in your defineProps and defineEmits so no casting is needed.
Maybe the unique parameter in the column annotation could help you?
#[ORM\Column(unique: true, name: "app_id", type: Types::BIGINT)]
private ?int $appId = null;
If the user is supposed to be unique, maybe a OneToOne relation could be better than a ManyToOne. I am pretty sure using OneToOne will also generate a unique index in your migration for you, even without the unique parameter.
#[ORM\OneToOne(inversedBy: 'userMobileApp', cascade: ['persist', 'remove'])]
#[ORM\JoinColumn(name: "user_id", nullable: false)]
private ?User $user = null;
After adding separate configurations for the two web applications, I'm encountering an issue with the custom binding for the second web app. I already have a setup for custom binding and DNS for the first web app.
Here's a lazy solution compared to the above answers: my Xcode project threw this error while an iPad was connected for testing. I tried deleting DerivedData, restarting Xcode, etc., but none of these helped. I ended up abandoning that project and creating a new one. The new project does not throw this error anymore.
If you are on a Mac and have been running your script like python my-script.py, you might want to try running it with sudo. I spent 30 minutes debugging correct code before realizing that requests needed sudo permissions.
I have the same question. Unfortunately, both links in the highlighted answer are now outdated. Does anyone have newer info on this?
For the condition I tried:
#intentName1 && #intentName2
intents.contains('intentName1') && intents.contains('intentName2')
intents.values.contains('intentName1') && intents.values.contains('intentName2')
The first two didn't throw an error but the dialog was just skipped when I entered an utterance in which both intents were recognized. The final one threw an error:
SpEL evaluation error: Expression [intents.size() > 0 && intents.values.contains('intentName1') && intents.values.contains('intentName2') && @entityName] converted to [intents.size() > 0 && intents.values.contains('intentName1') && intents.values.contains('intentName2') && entities['entityName']?.value] at position 73: EL1008E: Property or field 'values' cannot be found on object of type 'CluIntentResponseList' - maybe not public or not valid?
In the plugin developed for OPA, https://github.com/EOEPCA/keycloak-opa-plugin, it seems that the Admin UI was customised (see js/apps/admin-ui/src/clients/authorization/policy).
You have to manually allow location access from the phone settings by going to Settings > Privacy and Security > Location > Safari (or any other browser).
Got it: sounds like you're trying to bypass the whole "training" aspect and just hard-code your decision logic in a tree-like form. In that case, sklearn's DecisionTreeClassifier isn't really the right tool, since it's built to learn from data. A custom tree structure, like the Node class example given, would give you more control and let you directly define each condition without needing any training step. This way, you still get the decision-tree behavior, but exactly how you've designed it.
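Since the Node example itself isn't shown above, here is a minimal sketch of what such a hand-built tree could look like (the class and field names are just illustrative):

```python
class Node:
    """A hand-written decision node: no training, just hard-coded logic."""
    def __init__(self, condition=None, if_true=None, if_false=None, label=None):
        self.condition = condition  # callable taking a sample, returning bool
        self.if_true = if_true      # subtree used when condition holds
        self.if_false = if_false    # subtree used when it doesn't
        self.label = label          # leaf prediction (set only on leaves)

    def predict(self, sample):
        if self.label is not None:
            return self.label
        branch = self.if_true if self.condition(sample) else self.if_false
        return branch.predict(sample)

# example: a tiny tree defined entirely by hand
tree = Node(
    condition=lambda s: s["age"] >= 18,
    if_true=Node(label="adult"),
    if_false=Node(label="minor"),
)
print(tree.predict({"age": 30}))  # -> "adult"
```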
Hi, try SELECT concat(REPLICATE('0', 16 - LEN(NAG)), NAG) as NAG16,
where NAG is your varchar field and 16 is the length you need.
It seems this behaviour of SvelteKit is not replicable in NextJS. There is a similar feature in NextJS called prerendering, but prerendering only works for static pages.
For dynamic pages, the server components start to render on the server only after the page is navigated to. If needed, a suspense boundary can be used as a placeholder (which is displayed instantly) before the whole page is rendered.
With respect to the wasted bandwidth of fetching links when they come into the viewport, @oscar-hermoso's answer of switching the prefetch option to on-hover works.
After using both frameworks, it feels as if SvelteKit is really well thought out. NextJS relies on a CDN to make the site fast; SvelteKit uses a simple but clever trick. So when end users use the site, the SvelteKit version feels much faster.
For me, I added this line to the top of the requirements.txt file, and I was able to install the packages successfully.
torch==2.2.2
I don't know whether this is any help, but I fixed a similar issue just by putting "" around the echo line.
I would recommend taking a look at the Mongoose Networking library.
It's a lightweight open-source networking library designed specifically for embedded systems. It includes full support for most networking protocols, including MQTT. With the MQTT support, you can build not just a client but also an MQTT broker. The library is highly portable and supports a wide variety of microcontrollers and platforms. It can run on a bare-metal environment or with an RTOS like FreeRTOS or Zephyr.
Mongoose has solid documentation of all its features and usages, and you can find an example of building a simple MQTT server here.
Heads up: I am part of the Mongoose development team. Hope this solves your problem!
Adding one more suggestion for Kubernetes clusters (future readers may look here):
Check whether your clock is skewed by using one of these commands: chronyc tracking or timedatectl status.
If Leap Status is "Not synchronised", then do an NTP synchronization.
The official SQLMesh documentation and source code currently focus on Slack and email as supported notification targets. There is no out-of-the-box support for Microsoft Teams mentioned.
However, since Teams supports incoming webhooks similar to Slack, you can likely adapt the Slack webhook configuration for Teams by:
Creating an Incoming Webhook in your Teams channel.
Using that webhook URL in your SQLMesh notification configuration.
Formatting the payload to match Teams' connector message format: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using
Try configuring a Teams webhook and test sending a JSON payload from SQLMesh using the same mechanism as Slack.
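For a quick standalone test of the webhook itself (outside SQLMesh), something like this sketch works; the URL is a placeholder:

```python
import requests

# placeholder: paste the incoming-webhook URL from your Teams channel here
webhook_url = "https://example.webhook.office.com/webhookb2/..."

payload = {"text": "SQLMesh notification test"}
resp = requests.post(webhook_url, json=payload, timeout=10)
resp.raise_for_status()  # raises if Teams rejected the message
print("Webhook accepted the message")
```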
What I’d do in UiPath is pretty straightforward:
Read the new data from the first Excel file using Read Range (under the Modern Excel activities).
Read the existing data from the target sheet in the second file.
Combine them: put the new data above the existing data in a single DataTable, using the Merge Data Table activity or by cloning the new DataTable (newDataTable.Clone) and importing the rows in the right order.
Write the merged table back to the target sheet using Write Range.
Basically, you’re replacing the sheet content with “new rows first, then old rows” instead of trying to physically insert rows at the top in Excel, which UiPath doesn’t handle directly.
Reset someClass.someProp = null; before the second render, or use beforeEach to mock and reset state properly.
With Windows 11, it was additionally necessary for me to add sqlservr.exe to the allowed firewall apps.
I followed those instructions:
https://docs.driveworkspro.com/Topic/HowToConfigureWindowsFirewallForSQLServer
Thanks for all,
Harald
I realize that this thread is really old, but perhaps it's still alive enough to find someone to help me out. On a daily basis, I have different documents in which I need to highlight certain words (they change with every doc). I'd like an easy way to tell Google Docs to highlight the words in yellow each time. The previous posts seem to provide some info, but I can't figure out how to get any of them to run properly. I envision a Google Docs "template" to which I would copy the text. Then I could run some type of script based on the keywords (even if I have to manually edit the script each time) to highlight the words. I could then copy that altered text into the final document. But I need step-by-step instructions on how to get this working.
Try to wrap it in a CDATA construct.
The example in the link below shows the case:
<![CDATA[
Within this Character Data block I can
use double dashes as much as I want (along with <, &, ', and ")
*and* %MyParamEntity; will be expanded to the text
"Has been expanded" ... however, I can't use
the CEND sequence. If I need to use CEND I must escape one of the
brackets or the greater-than sign using concatenated CDATA sections.
]]>
More to read:
What does <![CDATA[]]> in XML mean?
4 0 obj
(Identity)
endobj
5 0 obj
(Adobe)
endobj
8 0 obj
<<
/Filter /FlateDecode
/Length 178320
/Length1 537536
/Type /Stream
>>
stream
[... binary FlateDecode stream data omitted ...]
I also have the same error; my HOMEDRIVE and HOMEPATH seem to be correct. However, when I type bash, my WSL starts, not the MSYS2 bash. I also have the MSYS path in my environment variables so I can use certain packages natively, so this could also be causing issues. Any suggestions?
Forgive me if someone already answered this, but from what I understand, it did exactly what it was told to. Your original image was mostly grey and black, so the two colors it chose to downsize to were grey and black. It doesn't matter if you set it to "L" or "RGB", since you gave it a predominantly grey and black image. As the other comment mentioned, you can create a very small image where the desired black & white palette is encoded into a minimal number of pixels, and pass this to the quantize method.
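For example, a small Pillow sketch of that idea, encoding a pure black & white palette into a tiny "P"-mode image (the file names are placeholders):

```python
from PIL import Image

src = Image.open("input.png").convert("RGB")

# a 1x2 palette image whose palette holds exactly the colors we want
palette_img = Image.new("P", (1, 2))
# first two entries: black and white; pad out the rest of the 256-color palette
palette_img.putpalette([0, 0, 0, 255, 255, 255] + [0, 0, 0] * 254)

bw = src.quantize(palette=palette_img)
bw.save("output.png")
```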
Hi, this is the ChatGPT version; you might like this as well.
private Rectangle GetCellBounds(int col, int row)
{
int x = tlp_tra_actual.GetColumnWidths().Take(col).Sum();
int y = tlp_tra_actual.GetRowHeights().Take(row).Sum();
int w = tlp_tra_actual.GetColumnWidths()[col];
int h = tlp_tra_actual.GetRowHeights()[row];
return new Rectangle(x, y, w, h);
}
void tableLayoutPanel1_CellPaint(object sender, TableLayoutCellPaintEventArgs e)
{
e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
try
{
int row = e.Row;
var g = e.Graphics;
float radiusFactor = 1.4f; // 1.0 = original, >1 = bigger arc
Rectangle cellRect = GetCellBounds(0, row);
// Make radius bigger than cell min size * factor
int baseRadius = Math.Min(cellRect.Width, cellRect.Height);
int radius = (int)(baseRadius * radiusFactor);
using (GraphicsPath path = new GraphicsPath())
{
// Move starting point higher up (because arc is larger)
path.StartFigure();
path.AddLine(cellRect.Left, cellRect.Bottom - radius, cellRect.Left, cellRect.Bottom);
path.AddLine(cellRect.Left, cellRect.Bottom, cellRect.Left + radius, cellRect.Bottom);
// Bigger arc, starts at bottom and sweeps up to left
path.AddArc(
cellRect.Left, // arc X
cellRect.Bottom - radius, // arc Y
radius, // arc width
radius, // arc height
90, 90);
path.CloseFigure();
using (Brush brush = new SolidBrush(Color.FromArgb(150, Color.DarkBlue)))
{
g.FillPath(brush, path);
}
}
}
catch
{
}
}
If an optional Core Data property has a default value set in the model editor, then:
Core Data never stores nil for that property — it immediately populates new objects with the default value.
That means even if you never explicitly set it, reading it will return the default (e.g., 0), not nil.
valueForKey: will also return an NSNumber with that default value, not nil.
How to allow nil detection:
Remove the default value in the model editor and leave the property optional.
After that, Core Data will store nil if you don’t set a value.
Now you can detect nil using valueForKey: or by declaring it as an NSNumber *.
Best practice is to use single quotes all the time:
ORG1_PASSWORD='$orgOne12345'
ORG2_PASSWORD='$orgTwo180000'
ORG3_PASSWORD='ORG_Admin123'
With no quotes or double quotes, the variables will be interpolated in most cases (when interpreted using bash).
Escaping each character is too verbose, and you have to think about it and do it properly every time you change the password.
function test<T extends string>(arr: T[], callback: (get: (key: T) => string) => void): Promise<void> {
return Promise.resolve();
}
test(['a', 'b', 'c'], (get) => {
get('a'); //works
get('d'); // compiler failure
});
It's not a perfect solution, since crontab works in months, not weeks, but the pattern I'd suggest is:
0 3 */14 * *, which executes a job at 3 AM on every 14th day of the month (i.e. the 1st, 15th, and 29th, since the day-of-month field starts at 1), which is close to bi-weekly. But since most months are 30 or 31 days long, you actually get: an execution on the 1st, two weeks pass, another execution on the 15th, two weeks pass, another on the 29th, then only 2-3 days pass until the next month's run on the 1st, and so on.
If it has to be exactly 14 days apart, it could be a bit more tricky.
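One common workaround is to schedule the job daily (e.g. 0 3 * * *) and have the script itself bail out unless a full 14 days have passed since an anchor date; a Python sketch (the anchor date is arbitrary):

```python
import datetime
import sys

ANCHOR = datetime.date(2024, 1, 1)  # arbitrary reference date

# cron runs this every day at 3 AM; it only proceeds every 14th day
if (datetime.date.today() - ANCHOR).days % 14 != 0:
    sys.exit(0)

# ... the actual bi-weekly job goes here ...
print("running bi-weekly job")
```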
from PIL import Image, ImageEnhance
import requests
from io import BytesIO
# Load your image (update the path if needed)
base_image = Image.open("Screenshot_20250814_101245.jpg").convert("RGBA")
# Load CapCut logo (transparent PNG from web)
logo_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/6b/CapCut_Logo.svg/512px-CapCut_Logo.svg.png"
response = requests.get(logo_url)
logo_image = Image.open(BytesIO(response.content)).convert("RGBA")
# Resize logo to medium size (15% of image width)
base_width, base_height = base_image.size
logo_scale = 0.15
new_logo_width = int(base_width * logo_scale)
aspect_ratio = logo_image.height / logo_image.width
new_logo_height = int(new_logo_width * aspect_ratio)
logo_resized = logo_image.resize((new_logo_width, new_logo_height), Image.LANCZOS)
# Set opacity to 60%
alpha = logo_resized.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(0.6)
logo_resized.putalpha(alpha)
# Position logo in bottom-right corner
position = (base_width - new_logo_width - 10, base_height - new_logo_height - 10)
# Paste logo onto original image
combined = base_image.copy()
combined.paste(logo_resized, position, logo_resized)
# Save the result
combined.save("edited_with_capcut_logo.png")
print("✅ Saved as 'edited_with_capcut_logo.png'")
Looks like the issue’s not with react-export-excel itself but with how npm is trying to grab one of its dependencies over SSH from GitHub. Your network or firewall is probably blocking port 22, which is why it’s timing out.
I’d switch Git from SSH to HTTPS so it can bypass that restriction:
git config --global url."https://github.com/".insteadOf git@github.com:
Then try installing again.
If it still gives you trouble, you might just want to replace react-export-excel; it’s pretty outdated. I’ve had better luck using the xlsx + file-saver combo, and it’s actively maintained.
Has anyone been able to solve this problem?
Based on @Pete Becker's answer, I decided to use the following lock-less method: Prepare the output in a std::stringstream and send it to std::cerr in one (expected to be atomic) call.
#include <iostream>
#include <sstream>
[...]
std::stringstream lineToPrint;
lineToPrint << " Hello " << " World " << std::endl;
std::cerr << lineToPrint.str();
There are (at least) two ways you could go about it, seeing that the column structure is identical in the two files.
You could use a Read Range activity on the source Excel file to copy, and an Append Range activity on the destination file. Both of these activities need to be in an Excel Process Scope container.
Another way to go about it could be to read both Excel files (Read Range) and use a Merge Data Table activity to merge the two, before using a Write Range activity to write the entirety back to the destination file.
Best Regards
Soren
That is the expected behaviour. The line apex.item("P1_ERROR_FLAG").setValue("ERROR"); sets the value of the page item on the client side. Observe the network tab in the browser console: there will be no communication with the server when this happens. The value only gets sent to the server in specific cases, such as a page submit or a dynamic action that submits the item.
The post does not say when this code executes, but I would create a dynamic action on change of P1_ERROR_FLAG that has an action of "Execute Server-side Code", with "Items to Submit" set to P1_ERROR_FLAG and code NULL;. This will submit that page item to the server.
There might be better solutions for your use case, but then please provide more info (as much as possible) about how the page is set up: at what point do you need the P1_ERROR_FLAG value, and how is it used?
After switching from 11g to 12c, I use Altova XMLSpy.
Here is a video showing how to do it:
https://www.youtube.com/watch?v=piVbWtChd6I
And one more nice feature - XSLT / XQuery Back-mapping in Altova XMLSpy:
https://www.youtube.com/watch?v=lK1EDLbxxyo
While writing this question I fiddled around some more and found a solution, but since I haven't found a similar question with a working answer so far, I decided to post this question anyway, including the answer. I hope that's ok.
For some reason, setting the environment variables using solr.in.sh doesn't work. However, setting them via compose's environment: block works just fine, so adjusting this block to
environment:
ZK_HOST: [SELF-IP]:2181
SOLR_OPTS: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -Djetty.host=[SELF-IP]
SOLR_TIMEZONE: Europe/Berlin
SOLR_HOST: [SELF-IP]
worked out sufficiently, no host-mode required.
import os
import sys

from PyQt6.QtGui import QGuiApplication
from PyQt6.QtQml import QQmlApplicationEngine

# Construct the path to the PyQt6 plugins directory
# pyqt6_plugins_path = '/opt/python-venv/venv-3.11/lib/python3.11/site-packages/PyQt6/Qt6/plugins'
pyqt6_plugins_path = os.path.join(sys.prefix, 'lib', f'python{sys.version_info.major}.{sys.version_info.minor}', 'site-packages', 'PyQt6', 'Qt6', 'plugins')
# Set QT_PLUGIN_PATH to include both the PyQt6 plugins and the system Qt plugins
os.environ['QT_PLUGIN_PATH'] = f'{pyqt6_plugins_path}:/usr/lib/qt6/plugins'
# Set the Qt Quick Controls style for Kirigami to prevent the "Fusion" warning
os.environ["QT_QUICK_CONTROLS_STYLE"] = "org.kde.desktop"
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
# Add the system QML import path
engine.addImportPath("/usr/lib/qt6/qml")
.btn.disabled,
.btn[disabled],
fieldset[disabled] .btn {
cursor: not-allowed;
...
}
Posting an answer in case anybody has this exact problem (kudos to @Grismar in the comments).
Setting ssl_verify_client optional_no_ca; will allow the handshake to complete, and $ssl_client_verify will be set to FAILED:unable to verify the first certificate, which is what I wanted to achieve. It will still work as before when the client has no cert at all ($ssl_client_verify is set to NONE).
You probably want .CreatedSince (elapsed time since the image was created).
"Certified keyword translation services in Doha, Qatar — perfect for businesses, websites, and marketing. Fast, accurate, and culturally adapted to enhance your global reach. Learn more: https://eztranslationservice.com/"
When you talk about production and testing, I would assume you maintain two separate instances of your service side by side: one for testing and one for production. That's because you typically do not want to have to shut down your production application just to test a new version.
So I would start two instances, one with TEST_MODE and one with PRODUCTION set. You could do that by running your Python script twice; you'll probably want to create two batch files that first set the correct env variables and then run the frontend and backend scripts. Depending on those two env variables, you set a different database URL as well as a different frontend URL.
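A minimal sketch of that pattern on the Python side (the variable names and URLs are assumptions):

```python
import os

# the batch file for the test instance sets TEST_MODE=1; production leaves it unset
TEST_MODE = os.environ.get("TEST_MODE") == "1"

DATABASE_URL = (
    "postgresql://localhost/app_test" if TEST_MODE
    else "postgresql://localhost/app_prod"
)
FRONTEND_URL = "http://localhost:8001" if TEST_MODE else "http://localhost:8000"
```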
I faced the same issue. I use the SQLCMD or BCP method to convert the file to UTF-8. Please see my stored procedure below for details.
ALTER PROCEDURE [wmwhse1].[SP_CUSTOMER_GetLoadDataLastHourEmail]
@StorerKeys NVARCHAR(500) = 'XYZ',
@EmailTo NVARCHAR(255) = '[email protected]',
@EmailSubject NVARCHAR(255) = '[PROD] CUSTOMER - Load Data Report Hourly'
AS
BEGIN
SET NOCOUNT ON;
DECLARE @FileName NVARCHAR(255);
DECLARE @FilePath NVARCHAR(500);
DECLARE @EmailBody NVARCHAR(MAX);
DECLARE @CurrentDateTime NVARCHAR(50);
DECLARE @HtmlTable NVARCHAR(MAX);
DECLARE @RecordCount INT;
DECLARE @BcpCommand NVARCHAR(4000);
BEGIN TRY
-- Generate timestamp for filename
SET @CurrentDateTime = REPLACE(REPLACE(CONVERT(NVARCHAR(50), GETDATE(), 120), '-', ''), ':', '');
SET @CurrentDateTime = REPLACE(@CurrentDateTime, ' ', '_');
SET @FileName = 'LoadDataReport_' + @CurrentDateTime + '.csv';
-- Set file path - ensure this directory exists and has write permissions
SET @FilePath = 'C:\temp\' + @FileName;
-- Get data for HTML table and record count
DECLARE @TempTable TABLE (
storerkey NVARCHAR(50),
MANIFEST NVARCHAR(50),
EXTERNALORDERKEY2 NVARCHAR(100),
LOADSTOP_EDITDATE DATETIME
);
INSERT INTO @TempTable
EXEC [wmwhse1].[SP_CUSTOMER_GetLoadDataLastHourData] @StorerKeys = @StorerKeys;
SELECT @RecordCount = COUNT(*) FROM @TempTable;
PRINT 'Records found in temp table: ' + CAST(@RecordCount AS NVARCHAR(10));
-- Only proceed if we have data
IF @RecordCount > 0
BEGIN
-- Create a global temp table for BCP export
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
CREATE TABLE ##TempLoadData (
storerkey NVARCHAR(50),
MANIFEST NVARCHAR(50),
EXTERNALORDERKEY2 NVARCHAR(100),
LOADSTOP_EDITDATE VARCHAR(50) -- Changed to VARCHAR for consistent formatting
);
INSERT INTO ##TempLoadData
SELECT
storerkey,
MANIFEST,
EXTERNALORDERKEY2,
CONVERT(VARCHAR(50), LOADSTOP_EDITDATE, 120)
FROM @TempTable;
PRINT 'Global temp table created with ' + CAST(@@ROWCOUNT AS NVARCHAR(10)) + ' records';
-- Method 1: Try SQLCMD approach first (more reliable than BCP for this use case)
SET @BcpCommand = 'sqlcmd -S' + @@SERVERNAME + ' -d SCPRD -E -Q "SET NOCOUNT ON; SELECT ''storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE''; SELECT storerkey + '','' + ISNULL(MANIFEST,'''') + '','' + ISNULL(EXTERNALORDERKEY2,'''') + '','' + LOADSTOP_EDITDATE FROM ##TempLoadData ORDER BY LOADSTOP_EDITDATE DESC" -o "' + @FilePath + '" -h -1 -w 8000';
PRINT 'Executing SQLCMD: ' + @BcpCommand;
EXEC xp_cmdshell @BcpCommand;
-- Check if file was created and has content
DECLARE @CheckFileCommand NVARCHAR(500);
SET @CheckFileCommand = 'dir "' + @FilePath + '"';
PRINT 'Checking if file exists:';
EXEC xp_cmdshell @CheckFileCommand;
-- Alternative Method 2: If SQLCMD doesn't work, try BCP with fixed syntax
DECLARE @FileSize TABLE (output NVARCHAR(255));
INSERT INTO @FileSize
EXEC xp_cmdshell @CheckFileCommand;
-- If file is empty or doesn't exist, try BCP method
IF NOT EXISTS (SELECT 1 FROM @FileSize WHERE output LIKE '%' + @FileName + '%' AND output NOT LIKE '%File Not Found%')
BEGIN
PRINT 'SQLCMD failed, trying BCP method...';
-- Create CSV header
DECLARE @HeaderCommand NVARCHAR(500);
SET @HeaderCommand = 'echo storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE > "' + @FilePath + '"';
EXEC xp_cmdshell @HeaderCommand;
-- BCP data export to temp file
SET @BcpCommand = 'bcp "SELECT ISNULL(storerkey,'''') + '','' + ISNULL(MANIFEST,'''') + '','' + ISNULL(EXTERNALORDERKEY2,'''') + '','' + ISNULL(LOADSTOP_EDITDATE,'''') FROM ##TempLoadData ORDER BY LOADSTOP_EDITDATE DESC" queryout "' + @FilePath + '_data" -c -T -S' + @@SERVERNAME + ' -d SCPRD';
PRINT 'Executing BCP: ' + @BcpCommand;
EXEC xp_cmdshell @BcpCommand;
-- Append data to header file
DECLARE @AppendCommand NVARCHAR(500);
SET @AppendCommand = 'type "' + @FilePath + '_data" >> "' + @FilePath + '"';
EXEC xp_cmdshell @AppendCommand;
-- Clean up temp file
SET @AppendCommand = 'del "' + @FilePath + '_data"';
EXEC xp_cmdshell @AppendCommand;
END
-- Final file check
PRINT 'Final file check:';
EXEC xp_cmdshell @CheckFileCommand;
END
ELSE
BEGIN
-- Create empty CSV with headers only
DECLARE @EmptyFileCommand NVARCHAR(500);
SET @EmptyFileCommand = 'echo storerkey,MANIFEST,EXTERNALORDERKEY2,LOADSTOP_EDITDATE > "' + @FilePath + '"';
EXEC xp_cmdshell @EmptyFileCommand;
PRINT 'Created empty CSV file with headers only';
END
-- Build HTML table (same as before)
SET @HtmlTable = '
<style>
table { border-collapse: collapse; width: 100%; font-family: Arial, sans-serif; }
th { background-color: #4CAF50; color: white; padding: 12px; text-align: left; border: 1px solid #ddd; }
td { padding: 8px; border: 1px solid #ddd; }
tr:nth-child(even) { background-color: #f2f2f2; }
tr:hover { background-color: #f5f5f5; }
.summary { background-color: #e7f3ff; padding: 10px; margin: 10px 0; border-left: 4px solid #2196F3; }
</style>
<div class="summary">
<strong>Report Summary:</strong><br/>
Generated: ' + CONVERT(NVARCHAR(50), GETDATE(), 120) + '<br/>
Storer Keys: ' + @StorerKeys + '<br/>
Time Range: Last 1 hour<br/>
Total Records: ' + CAST(@RecordCount AS NVARCHAR(10)) + '<br/>
<span style="color: green;"><strong>File Encoding: UTF-8</strong></span>
</div>
<table>
<thead>
<tr>
<th>Storer Key</th>
<th>Manifest</th>
<th>External Order Key</th>
<th>Load Stop Edit Date</th>
</tr>
</thead>
<tbody>';
-- Add table rows
IF @RecordCount > 0
BEGIN
SELECT @HtmlTable = @HtmlTable +
'<tr>' +
'<td>' + ISNULL(storerkey, '') + '</td>' +
'<td>' + ISNULL(MANIFEST, '') + '</td>' +
'<td>' + ISNULL(EXTERNALORDERKEY2, '') + '</td>' +
'<td>' + CONVERT(NVARCHAR(50), LOADSTOP_EDITDATE, 120) + '</td>' +
'</tr>'
FROM @TempTable
ORDER BY LOADSTOP_EDITDATE DESC;
END
SET @HtmlTable = @HtmlTable + '</tbody></table>';
-- Handle case when no data found
IF @RecordCount = 0
BEGIN
SET @HtmlTable = '
<div class="summary">
<strong>Report Summary:</strong><br/>
Generated: ' + CONVERT(NVARCHAR(50), GETDATE(), 120) + '<br/>
Storer Keys: ' + @StorerKeys + '<br/>
Time Range: Last 1 hour<br/>
<span style="color: orange;"><strong>No records found for the specified criteria.</strong></span>
</div>';
END
-- Create email body
SET @EmailBody = 'Please find the Load Data Report for the last hour below and attached as UTF-8 encoded CSV.
' + @HtmlTable + '
<br/><br/>
<p style="font-size: 12px; color: #666;">
This is a system generated email, please do not reply.<br/>
CSV file is encoded in UTF-8 format.
</p>';
-- Send email with HTML body and UTF-8 CSV attachment
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'HELLO',
@recipients = @EmailTo,
@subject = @EmailSubject,
@body = @EmailBody,
@body_format = 'HTML',
@file_attachments = @FilePath;
-- Clean up
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
-- Optionally delete the file after sending
DECLARE @DeleteCommand NVARCHAR(500);
SET @DeleteCommand = 'del "' + @FilePath + '"';
EXEC xp_cmdshell @DeleteCommand;
PRINT 'Email sent successfully with UTF-8 CSV attachment: ' + @FileName;
PRINT 'Records processed: ' + CAST(@RecordCount AS NVARCHAR(10));
END TRY
BEGIN CATCH
-- Clean up in case of error
IF OBJECT_ID('tempdb..##TempLoadData') IS NOT NULL
DROP TABLE ##TempLoadData;
DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
DECLARE @ErrorState INT = ERROR_STATE();
PRINT 'Error occurred while sending email: ' + @ErrorMessage;
RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH
END
I think I figured out the solution myself; I want to post the solution for a Windows local machine here. Thanks to @Wayne for the suggestion that "It's just that making it effectively work can be super tricky depending on your system".
I open the powershell of windows, type the following command
[System.IO.File]::WriteAllBytes("$env:TEMP\ctrl-d.txt", @(4))
Then I open the file using this command (open a folder and type the following in the address field):
%TEMP%\ctrl-d.txt
Then Ctrl-A + Ctrl-C to copy the character to the clipboard in Windows.
Paste that character into the interactive-mode prompt.
I am then back in normal ipdb mode instead of interactive mode.
You can see the result in the picture:
Did you use the correct mediaID/mediaType for reels and videos?
I'll share my own basic CLI for posting images and videos to Instagram, and you can see how to use the media type correctly in the code snippet below.
func createMediaContainer() (string, error) {
endpoint := fmt.Sprintf("https://graph.instagram.com/%s/%s/media", config.Version, config.IGID)
data := url.Values{}
if mediaType == "video" {
data.Set("media_type", "REELS")
data.Set("video_url", mediaURL)
} else {
data.Set("image_url", mediaURL)
}
data.Set("caption", caption)
data.Set("access_token", config.Token)
resp, err := http.PostForm(endpoint, data)
if err != nil {
return "", err
}
defer resp.Body.Close()
body, _ := ioutil.ReadAll(resp.Body)
if resp.StatusCode != 200 {
return "", fmt.Errorf("API error: %s", string(body))
}
return parseID(body), nil
}
You can try adding the --noweb argument.