You can full_join a tibble whose columns contain only NAs:
library(tidyverse)
mtcars %>%
  as_tibble() %>%
  full_join(tibble(mpg = NA, topspeed = NA))
# or, if you want all the columns of another tibble:
mtcars |>
  select(cyl) |>
  full_join(mtcars |> filter(FALSE))
I had a similar issue with OneDrive, which is also cloud storage.
It was not syncing my latest files even after saving them.
So I removed my files from that cloud storage and stored them on my local computer.
Most centralized crypto exchanges (like Binance, Coinbase, etc.) do not move crypto on the blockchain for each trade. Instead, they manage trades internally.
What Happens During a Trade
Let’s say:
You have ETH and want to trade it for ZETH.
Another person wants to trade ZETH for ETH.
When your trade is matched:
The exchange updates your balances in its internal system.
You get ZETH in your exchange account, and the other person gets ETH.
No actual blockchain transaction happens during this trade.
How the Exchange Takes a Cut
The exchange charges a trading fee (usually a small percentage like 0.1% or 0.2%).
For example:
If you trade 1 ETH and the fee is 0.2%, the exchange takes 0.002 ETH as a fee.
You receive the equivalent value of 0.998 ETH in ZETH.
This fee is deducted automatically during the internal balance update.
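The fee arithmetic above can be sketched in a few lines of Python (the 1 ETH trade size and 0.2% rate are just the example numbers from this answer, not any exchange's real fee schedule):

```python
# Example numbers only: trade 1 ETH with a 0.2% trading fee.
trade_amount_eth = 1.0
fee_rate = 0.002  # 0.2%

fee = trade_amount_eth * fee_rate              # what the exchange keeps
net_value_eth = trade_amount_eth - fee         # value credited to you, in ETH terms

print(fee)                    # 0.002
print(round(net_value_eth, 3))  # 0.998
```

You would then receive ZETH worth that net ETH value at the matched price; all of this is an internal ledger update, not an on-chain transfer.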
How do you deal with the condition race when refreshing the access_token?
In your answer, clicking on one of the controls triggers the label event, and then you change the value of the checkbox and edit the label manually. But I want the events/elements to be separate, like the approach in 'Does work, but ugly'. Basically, I want the functionality from there, but with the look of the custom-control from Bootstrap. I think these classes are preventing me from achieving this, so I may need to copy the style. – zesaro, Feb 20, 2024 at 14:38

Making labels for a form editable is a totally non-standard approach; the solution I present, allowing labels to be editable, is ONE way to do that non-standard approach. Similarly, a "shadow" input might be another. I think you need to consider how to clearly separate a view label from an editable entity: you either want the label editable or you DON'T, and you live with that choice you, as a functional designer, have made. FWIW, clicking a label for a checkbox to trigger the check/uncheck, as I have here, is the standard way labels work for those.
My code works in all the user pools I've created, but it doesn't in the production pool, without any clear reason I can find through the AWS console or CLI.
I have some suspects but no real evidence. If you run into a weird issue like this, try using the User ID (sub) as the username instead of the email. It should work either way, and it does in every other pool, but well... this workaround did it for me.
Can you delete the folder ${user}/.fx and try again?
Apologies for the confusion and for taking up your time with this—it was my own mistake. I really appreciate your help.
After all the investigation above, I finally discovered the true culprit: a project-root .env file that I had added for other tooling (e.g. to set JAVA_HOME for py5, a Processing-like Python library). That file contained something like:
# .env at the workspace root
JAVA_HOME=C:\Some\Java\Path
PATH=$JAVA_HOME$;$PATH
Normally a tool reads the .env and appends it to the existing environment, so your real PATH (and PROGRAMFILES, LOCALAPPDATA, etc.) remain intact, and PowerShell and CMD subprocesses work fine. debugpy, however, reads the .env and overrides the entire process environment. As a result:
- PATH becomes exactly C:\Some\Java\Path\bin;$PATH (no Windows system paths)
- powershell.exe can't be found (or finds no inherited variables like PROGRAMFILES)
- $env:… lookups return empty strings
Remove or adjust the .env so it doesn't clobber your entire PATH. For example, change it to extend rather than replace (change $VAR to %VAR%):
# .env — extend the existing PATH instead of overwriting it
JAVA_HOME=C:\Some\Java\Path
PATH=%JAVA_HOME%;%PATH%
After making one of these changes, PowerShell subprocesses under debugpy will once again inherit the full environment, and $env:PROGRAMFILES
, $env:LOCALAPPDATA
, etc. will be populated as expected.
@Daniel, I tried your OpenAPI spec file, and if I select only 1-2 APIs, it works fine. I think the problem is that the error message is not correct; the maximum supported OpenAPI size is around 100K.
Then use the Pagination component in combination with a custom Select component. The Pagination component only implements basic pagination functionality, so it won't include a page size selection dropdown.
In my case I don't have References metadata. Is there any way I can properly extract the references?
I started looking into it last week and am still looking for ways to streamline the process.
Here's what I've been doing:
:: Export from Rithum
:: Select & Export a sample Product from within Shopify Admin (left side: Home, Orders, Products > Export Product)
:: Paste the Rithum info into the relevant Fields (there are many that will be cast aside)
:: or Rename the Rithum CSV Fields to match and delete the rest
Fyi: Shopify will find any images hosted on Cloudflare.
Hope that helps.
Please let me know if you've discovered any better methods.
Thanks and good luck
Jim
I feel your pain on this one—debugging Teams/Office add-ins can be super frustrating, especially when you're doing everything right and still getting these cryptic TLS errors. So let’s break this down in a human, no-BS way:
🔍 What’s actually going wrong? The key error here is:

UNABLE_TO_GET_ISSUER_CERT_LOCALLY

Which basically means: the tool (in this case, @microsoft/teamsapp-cli) tried to make a secure HTTPS request but couldn't verify the certificate authority (CA) of the server it was trying to talk to. It doesn't trust the chain.

🧠 Why is this happening? Even though your dev certs (localhost.crt, etc.) are trusted locally for dev (hence no browser warnings), the Teams Toolkit CLI or Node environment (used by axios, etc.) may not trust those same certs, especially on corporate machines, behind proxies, or on Windows with funky cert setups.

✅ Fixes to try (start with the easiest):

1. 📜 Point Node at your dev cert:

set NODE_EXTRA_CA_CERTS=C:\Users\admin-aja\.office-addin-dev-certs\localhost.crt

If you're using PowerShell:

$env:NODE_EXTRA_CA_CERTS="C:\Users\admin-aja\.office-addin-dev-certs\localhost.crt"

Then re-run the command:

npx @microsoft/teamsapp-cli install --file-path "C:\Users\ADMIN-~3\AppData\Local\Temp\manifest.zip"

💡 If that works, you can add it to your start script or .env file.

2. 🔐 Make sure your machine trusts the cert. Double-click on localhost.crt and ensure it’s installed in the Trusted Root Certification Authorities store for Local Machine or Current User.

3. 🌐 Check whether npm is configured to use a proxy. Run:

npm config get proxy
npm config get https-proxy

If they're set and your proxy uses a self-signed cert, that could be breaking things. You’d have to export that cert and pass it as a trusted CA (same as above, using NODE_EXTRA_CA_CERTS).

4. ☠️ As a last resort, skip TLS verification (not recommended in production). Set this env var:

set NODE_TLS_REJECT_UNAUTHORIZED=0

⚠️ Only use this for local debugging! It disables cert checking entirely.

Note: You’ve done a solid job getting everything wired up. The manifest is valid, the dev server is running, certs are trusted by Office; it’s just the Teams CLI choking on that cert chain.
So yeah, try step 1 first—set NODE_EXTRA_CA_CERTS to point at your local .crt file and run again. That usually solves it for folks in environments with custom certs or local HTTPS setups.
Let me know what happens or if you want to jump on step 2 together. We’ll get it working.
It's just because some corrupted file is there. Try restarting your system and Android Studio. If it still doesn't work, remove the emulator and create a new one. Let me know if that works.
In any file type? not just markdown?
It's currently not possible in every file type with the base editor, without a third-party extension.
It's the same as trying to bold code comments: it will just be read as plain text unless it's a markdown or similar file type.
@polygenelubricants' breadth-first search (BFS) implementation works very well. When I tested it with the original 160 Rush Hour layouts I found 2 interesting things: first, Set 2 "intermediate" 1 was solved in 21 moves, which is 1 better than the solution on the game card (!); second (unfortunately), it doesn't seem to handle layouts which have a horizontal blocking car, for example Set 3 "expert" 30.
It's trivial enough to set the blocker as an intermediate goal, solve for that and then remove the blocker from the state and solve for the original goal.
Below are the 2 solutions for layout Set 2 "intermediate" 1:
Game: [D-1, P+3, B+3, Z-1, Q-3, C-3, O+1, Q+1, A+4, O-1, Q-1, C+3, Q+3, Z+1, B-4, Z-1, Q-3, D-3, C-3, O+3, Q+3, Z+6]
BFS: [B+3, Z-1, Q-3, C-3, Q+3, O+3, A+3, Z+3, Q-3, C+1, B-4, C-1, Q+3, Z-3, Q-1, O-1, D-4, P+3, Q+1, O+1, Z+4]
The game suggests D left 1 square, and later D left 3; the BFS produced one move of D left 4.
SELECT Name
FROM Tags
GROUP BY Name
HAVING SUM(CASE WHEN TagId = 1 THEN 1 ELSE 0 END) > 0
AND SUM(CASE WHEN TagId = 2 THEN 1 ELSE 0 END) = 0;
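A runnable check of this conditional-aggregation pattern, using an in-memory SQLite database and an assumed two-column schema Tags(Name, TagId) with made-up rows:

```python
import sqlite3

# Assumed schema for illustration: Tags(Name, TagId).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (Name TEXT, TagId INTEGER)")
conn.executemany(
    "INSERT INTO Tags VALUES (?, ?)",
    # alpha: has tag 1, no tag 2 -> should match
    # beta:  has both 1 and 2    -> excluded by the second condition
    # gamma: has only tag 2      -> excluded by the first condition
    [("alpha", 1), ("alpha", 3), ("beta", 1), ("beta", 2), ("gamma", 2)],
)

rows = conn.execute("""
    SELECT Name
    FROM Tags
    GROUP BY Name
    HAVING SUM(CASE WHEN TagId = 1 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN TagId = 2 THEN 1 ELSE 0 END) = 0
""").fetchall()

print(rows)  # [('alpha',)]
```

Each HAVING clause counts matching rows per group, so the query keeps only names that have at least one TagId = 1 row and zero TagId = 2 rows.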
Less important to the question: it appears that your example was loading into %edi instead of %eax.
000000000 8B 04 25 A60D4000 mov 0x400da6,%eax
000000007 8B 3C 25 A60D4000 mov 0x400da6,%edi
00000000e 90 nop
Unless you are loading from an absolute address with position independent code, rip-relative addressing will save you a byte.
000000038 8B 05 680D4000 mov 0x400da6(%rip),%eax
Oracle has two layers of network protection:
VCN Security Lists
Network Security Groups (NSGs) – optional but often used
You need to check both.
Go to the Oracle Cloud dashboard
Navigate to Networking > Virtual Cloud Networks
Click on your VCN → Go to Security Lists
Check your subnet’s security list
Make sure it has Ingress rules like:
Source CIDR Protocol Port Range
0.0.0.0/0 TCP 80
0.0.0.0/0 TCP 443
If you already have these rules and it still doesn't work, check the NSGs:
If your instance is part of a Network Security Group, you also need to allow traffic there.
Go to Networking > Network Security Groups
Choose the NSG attached to your instance
Make sure the same port 80/443 ingress rules exist there too
Check Your Public IP
Make sure you're using the correct public IP address (not the private one)
Run curl http://your_public_ip
from your local machine. Still not working? It's probably still a firewall issue.
Tell me if you have any more info.
On Windows, the default maximum path length is 260 characters. Git sometimes hits this limit when you have deeply nested folders or long filenames (e.g., spark-jobs/long-folder-name/.../file-name.yml
).
If you're on Windows 10 or later, run this in your Git Bash or CMD:
git config --system core.longpaths true
You might need admin privileges to run this.
This allows Git to use Windows’ newer APIs that support long paths beyond 260 characters.
As a workaround (especially if you're still hitting limits), you can move or clone the repository to a shorter path, e.g.:
C:\src\myrepo
This reduces the total path length Git has to deal with.
3. Check If It's Configured Properly
Run:
git config --show-origin core.longpaths
You should see:
file:... core.longpaths=true
There seems to be no commands for that feature, but with the Natural Language Dates plugin, you can have autosuggest translations from @today to the current date.
Source: https://forum.obsidian.md/t/how-to-quickly-insert-the-current-date-or-time-in-obsidian/20535
Use the Code Runner extension. Check that the gcc compiler path is set in the system environment.
You can handle force/soft updates using Firebase Remote Config.
I built a small open-source Swift framework that does exactly this —
you can check it out here: https://github.com/Manpreet7Kitlab/MeetAppUpdateManager
It supports:
- Firebase Remote Config version check
- Force and Soft update handling
- Customizable UI prompt
You can change the alert buttons as you see fit.
Nugget for someone:
update "test-dev-api-counter"
SET "ratingTest4" = if_not_exists("ratingTest4",0) + 1
WHERE "idDotTypeId"='wapm.kyhw.person'
AND "relationType" = 'rating'
RETURNING ALL NEW *
Yes, using both depends on your use case. For push notifications, Firebase Cloud Messaging is commonly used. While Firebase offers Realtime Database or Firestore for storage, if you’ve already integrated MongoDB, it’s perfectly fine to continue using it for data storage.
Maybe the following approach will help. Try using a general-purpose LLM (like Llama, Qwen, DeepSeek, etc.). Specify all possible categories in the prompt and ask the model to pick the categories that fit the text :)
You can also get embeddings for all texts and feed them into a multi-clustering algorithm. Here's an example of such an algorithm: https://www.researchgate.net/figure/Proposed-K-multicluster-algorithm_fig2_346086809.
exec(`"C:\Program Files\foo" bar`);
String html = `<html>
<body>
<p>Hello World.</p>
</body>
</html>
`;
System.out.println("this".matches(`\w\w\w\w`));
ThreeJS provides very little support for FBX materials if it gets a little more complicated, like PBR textures. I understand that you want to avoid Blender if you haven't used it yet, but I think you can manage loading it in and then, if it all looks okay, export the same model in glTF (.glb) format, which works way better in ThreeJS.
I'm also experiencing the same issue — restoring pagination state with `v-model:first` in lazy mode causes an unwanted automatic reset to `first = 0` due to an internal page event emitted right after mount. Even if `first` is correctly set programmatically, PrimeVue still emits a `@page` with `page = 0`, which overrides everything.
This breaks expected behavior in lazy mode. Pagination should behave like a dumb view that reflects whatever data the app gives it, and emit `@page` events **only when the user interacts**.
More details (including a full StackBlitz reproducer and a GitHub issue opened against PrimeVue) here:
👉 https://github.com/primefaces/primevue/issues/7618
If you're facing the same problem, upvote the GitHub issue so the team gives it proper attention. This is critical for apps that rely on persisted pagination state (e.g., from localStorage or Vuex).
If the response is in JSON format, then to hide sensitive info like userId, email, mobile, and anything else in the response, we can use the @JsonIgnore annotation, which will hide fields like email, mobile, and userId from the response.
Laravel 8.0 requires PHP 7.3 or higher; it seems you have a version lower than 7.3. In your command prompt, type
php -v
and check the PHP version. If it's lower than PHP 7.3, update it.
I had been facing the same issue for two days, but I found the solution. I think the issue is related to Auth (the folder or component name): I changed the Auth folder to UserAuth and vice versa.
Structure changes I made:
This solution works for me. Kindly check whether it works for you; I hope it will.
It happened to me also. In my case, the file which I was trying to copy from the remote machine was already present with the same file name, and it was open on my target machine. Once I closed the file on the target machine, I could successfully copy the file using the scp command.
Example usage of MeetAppUpdateManager:
ForceUpdateChecker.shared.verifyVersion()
I've faced a similar problem. After deleting caches, the problem persisted until I deleted the Gradle folder. After reopening, IntelliJ started downloading a new Gradle as it always does.
I would use this array of users:
soapBody = soapBody & "<Field Name=""BUPS_x0020_names"">" & _
"<ArrayOfUser><User>[email protected]</User><User>[email protected]</User></ArrayOfUser>" & _
"</Field>"
I faced a similar kind of issue, and select user_name() was also returning "dbo" instead of the currently logged-in user. The reason was that the user had been given the "sysadmin" role. Removing this role fixed the issue for me. I thought this would be helpful for someone facing this issue.
Sometimes, terminal processes may take time due to low CPU configuration or too many environment path configurations. Perform all these operations from your IDE (Android Studio) instead.
The answer is that no one is able to provide an answer to the problem.
In my case I just installed HtmlToPdfConverter.Core, which resolved my problem.
You could also try a series of three-em dashes: &#11835;
For me that works better than &mdash; since it produces a solid line with no breaks.
This Issue was fixed in v0.14.7
You just need to use try and except statements, e.g.:
from selenium.common.exceptions import NoSuchElementException

try:
    rev = container_of_reviews.find_element(locator_of_element)
except NoSuchElementException:
    # Selenium couldn't find the element, so this block runs instead
    rev = None
This version of react-router-dom (6.22.0) works fine with the latest version of React. Install this router version and it will work:
npm install react-router-dom@6.22.0
rm -rf .parcel-cache
npm start
Try it.
Well, the values() method of HashMap does not guarantee any specific ordering of its elements, in the same way that HashMap does not guarantee any specific ordering of its key-value pairs.
Try printing both values() functions and you shall see this in action yourself.
Why is addEventListener('order') not receiving anything?
One possible cause is that your public SseEmitter subscribe() method does not have @CrossOrigin("*").
Is there something I'm missing with SseEmitter.event().name(...) or how Spring formats the stream?
I tested that emitter.send(SseEmitter.event().name("order").data(messageToSend));
worked in an example.
Does using data: gymIdx (a plain number) cause issues?
In emitter.send(SseEmitter.event().name("order").data(messageToSend));
, I replaced messageToSend with 1
and I still got the 1
in the browser. Therefore, a number here was tested to not cause problems in sending the message.
What’s the correct way to verify eventName-based listeners in EventSource with Spring Boot?
Whether it is correct or not depends on your data, in my opinion. If you get what you want in the browser, I think you have already done the verification with eventSource.addEventListener('order', (event) => { console.log('[✅ Event received]', event.data); }) in the browser. To help speed up the debugging process, I printed the response in the HTML:
eventSource.value.addEventListener("order", (event) => {
const orderResponse = ("[✅ Event received]", event.data);
console.log(orderResponse);
document.querySelector("#orders").innerHTML = document.querySelector("#orders").innerHTML + "<div>" + orderResponse + "</div>";
events.value.push(JSON.parse(event.data));
});
I also printed what Spring Boot showed:
for (int i = 0; i < orders.size(); i++) {
String messageToSend = orders.get(i);
emitter.send(SseEmitter.event().name("order").data(messageToSend));
System.out.println(messageToSend);
TimeUnit.SECONDS.sleep(2);
if (i > 18){
emitter.complete();
}
}
When comparing what Spring Boot showed and what the browser showed, I had the confidence in saying that I had what data I wanted because they were the same on both sides.
{ "id": "1", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "pending" }
{ "id": "2", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
{ "id": "3", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
{ "id": "4", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
{ "id": "5", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
{ "id": "6", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "pending" }
{ "id": "7", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
{ "id": "8", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
{ "id": "9", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "pending" }
{ "id": "10", "time" : "2025-04-17T03:56:54.213627600Z", "OrderStatus": "complete" }
My usable example was here. After starting Spring Boot, I opened http://localhost:8080
in my browser.
The Nuxt files were in the nuxt folder for npm install
and npm run dev
. Then, I opened http://localhost:3000
in my browser.
The answer, as @red_menace points out, is that the text string must be explicitly cast as a date object:
set startDate to date "Sunday, April 20, 2025 at 12:00:00 AM"
historicaldata.net provides stock data for delisted tickers.
Already solved; my program can run after adding this injection in the MasterDebiturBatchConfig class:
@Bean
@StepScope
public DebtorReader debtorReader(@Value("#{jobParameters['filePath']}") String filePath) {
return new DebtorReader(filePath);
}
and removing this annotation and constructor from DebtorReader:
@Component
@StepScope
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
Can you share your code? I also want to make the Note app in Swift for iOS.
Even though I integrated the amr-nb code into my C++ project according to your instructions
(step 2: integrate the amr-nb code into the C++ runtime project),
header files are missing during compilation. Can you give me some advice?
Thanks a lot
One way to get this working is by immediately providing one temporary item (e.g. "Loading ..."), then start the async stuff, and finally replace the temporary loading item by the real items.
I found the reason: the AudioTrack was not initialized correctly. The correct code is below:
audioTrack = AudioTrack(AudioManager.STREAM_SYSTEM, sampleRateInHz, AudioFormat.CHANNEL_OUT_MONO, audioFormat, bufferSize * 4, AudioTrack.MODE_STREAM)
# I added this to ignore all 3rd party content, but allowed my content
# Also, included list of 3rd party libraries in README.md
!/MyGame
/MyGame/*
!/MyGame/Content
/MyGame/Content/*
!/MyGame/Content/_Content
java.lang.RuntimeException: Font asset not found fonts/fd
at android.graphics.Typeface.createFromAsset(Typeface.java:1159)
at n.ۥۡۧۨ.ۥۣ(Unknown Source:18)
at n.ۥۢ۟۟.run(Unknown Source:10)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:210)
at android.os.Looper.loop(Looper.java:299)
at android.app.ActivityThread.main(ActivityThread.java:8319)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:556)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1038)
Now I know how to use it: I should use the location in the local code instead of '_prefect_loader_'.
Thank you all
I know the "problem": this is because you are creating a VM on Azure with a Core version.
Microsoft creates 2 versions of the same server operating system:
With Desktop Experience. It is easy to use (with the mouse) but heavier than the Core version.
Core (PowerShell or CMD) Experience. A cleaner version, like the "Linux servers".
If you are a beginner managing servers, I recommend you avoid Azure images like "Win2022AzureEditionCore" and learn basic PowerShell and CMD commands while practicing with the Windows GUI, like "Win2022AzureEdition"; note that this name doesn't include the word "Core".
If you want to learn about the differences between Core and Desktop Experience, or learn about the Core edition, please see this video. I hope it helps.
Bye :3 .
My question is who will meet me after my death, and how I will be treated?
I found the answer.
When using Kafka transactions, the acknowledge mode (AckMode) does not actually affect the behavior.
See this discussion for reference:
Spring Kafka Consumer ACKMODE & Producer buffering for Kafka transactions
Even in the TransactionTemplate, it simply executes the code and then commits — there's no coordination with AckMode.
So regardless of whether you set AckMode.BATCH or AckMode.RECORD, the transaction boundary determines the commit, not the acknowledge mode.
https://github.com/spring-projects/spring-framework/blob/39e263fe5d8ba767d22e594cffd02420bab43f2a/spring-tx/src/main/java/org/springframework/transaction/support/TransactionTemplate.java#L127
In case you still have this problem, I think a good workaround is to return only one of these two optimizers from your configure_optimizers method. The other one you can store in a field of your LightningModule. This way only one of them will be wrapped as a LightningOptimizer (and thereby contribute towards global_step). I think this does not have any side effects, except that now only the state of the optimizer you returned from configure_optimizers is serialized together with your model checkpoints.
Best,
Chris
I needed to add the @Configuration annotation; then matching started to hit the custom filter chain:
@EnableWebSecurity
@ComponentScan("com.xxxx.nes.**")
@EntityScan("com.xxxx.nes.model.**")
@EnableJpaRepositories("com.xxxx.nes.model.**")
@Configuration <---
public class ResourceServeConfig {
@Bean
@Order(1)
public SecurityFilterChain actuatorEndpoints(HttpSecurity http) throws Exception {
http
.securityMatcher("/actuator/health")
.authorizeHttpRequests(auth -> auth.anyRequest().permitAll())
.csrf(AbstractHttpConfigurer::disable);
return http.build();
}
@Bean
@Order(2)
public SecurityFilterChain securedEndpoints(HttpSecurity http) throws Exception {
http
.csrf(csrf -> csrf.ignoringRequestMatchers("/**/notifications/**"))
.authorizeHttpRequests(
auth ->
auth
.requestMatchers("/**/notifications/**")
.hasAuthority("SCOPE_notifications")
.anyRequest()
.authenticated()
)
.oauth2ResourceServer(
oauth -> oauth
.jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverter()))
);
return http.build();
}
private JwtAuthenticationConverter jwtAuthenticationConverter() {
JwtGrantedAuthoritiesConverter converter = new JwtGrantedAuthoritiesConverter();
converter.setAuthorityPrefix("SCOPE_");
converter.setAuthoritiesClaimName("scope");
JwtAuthenticationConverter jwtConverter = new JwtAuthenticationConverter();
jwtConverter.setJwtGrantedAuthoritiesConverter(converter);
return jwtConverter;
}
}
The missing piece not mentioned in any of these answers nor any other search result I can find (though the OP hints at it) is that the date cannot be a string, it must be a string CAST AS A DATE OBJECT
set startDate to date "Sunday, April 20, 2025 at 12:00:00 AM"
Make sure your MetaMask (or other wallet) is unlocked and you're on the correct network (e.g., Ethereum Mainnet, Polygon, etc.).
Some dApps only support specific networks. If you're on the wrong one, the dApp won’t recognize your wallet.
Check the docs here
Both appSettings and connectionStrings need to be arrays, but you are not passing connectionStrings as an array; you are using an object parameter instead: connectionStrings object = {}
First, thanks for your kind answer. Sure, I know this, but even after adding the separator after every line, or adjusting them into one single line manually in the makefile, after typing the make command again there is still this error:
$ make
C:/Program Files/GNU MCU Eclipse/Build Tools/2.12-20190422-1053/bin/make all-recursive
make[1]: Entering directory 'E:/download/opencore-amr-0.1.3'
Making all in amrnb
/usr/bin/sh: line 18: C:/Program: No such file or directory
make[1]: *** [Makefile:344: all-recursive] Error 1
make[1]: Leaving directory 'E:/download/opencore-amr-0.1.3'
make: *** [Makefile:275: all] Error 2
Because the makefile is created by the configure process, I guess the original cause is the configure file, but I don't know how to modify it.
Thanks for your reply.
Best regards
Stephen
https://bobj-board.org/t/page-headers-in-a-subreport/68337/4
This was useful for me.
In the subreport, create a formula called FakePageHeader:
WhileReadingRecords;
" "
Go to the ‘Insert’ menu and click ‘Group’. Select the FakePageHeader formula.
Select the ‘Repeat Group Header on Each New Page’ option, and click ‘OK’. This inserts a new group at the lowest, or innermost, grouping level. You will need to move this group to the highest, or outermost, grouping level.
Go to ‘Report’ menu and click ‘Group Expert’. Use the up arrow to move this newest group up to the top of the list.
Move all the headers that you would like repeated into this Header for the @FakePageHeader group.
Compose Material3 version 1.4.0-alpha10:
minTabWidth: Dp = TabRowDefaults.ScrollableTabRowMinTabWidth
The reflection workaround in another answer is broken because they renamed the field, although it is no longer needed.
You can also specify missing values during DataFrame creation, especially if you are reading the data from a file.
import pandas as pd
df = pd.read_csv("some_file.csv", na_values=["?"])
With Gradle 8+, I want to change Kotlin to compile to the same directory as Java:
compileKotlin {
destinationDirectory = file("build/classes/java/main")
}
Reference: https://github.com/oapi-codegen/oapi-codegen https://github.com/oapi-codegen/oapi-codegen/tree/main/examples/petstore-expanded/gin
Environment: go1.24.2, https://go.dev/dl/
Init go module
mkdir petstore
cd petstore
go mod init petstore
Get tool oapi-codegen
go get -tool github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen@latest
After this step, the tool is added to the go.mod.
Prepare the config for oapi-codegen. The full config options are described by this schema: https://github.com/oapi-codegen/oapi-codegen/blob/main/configuration-schema.json
server.cfg.yaml
package: main
generate:
gin-server: true
output: petstore-server.gen.go
types.cfg.yaml
package: main
generate:
models: true
output: petstore-types.gen.go
Prepare the openapi file
petstore-expanded.yaml
openapi: "3.0.0"
info:
version: 1.0.0
title: Swagger Petstore
description: A sample API that uses a petstore as an example to demonstrate features in the OpenAPI 3.0 specification
termsOfService: https://swagger.io/terms/
contact:
name: Swagger API Team
email: apiteam@swagger.io
url: https://swagger.io
license:
name: Apache 2.0
url: https://www.apache.org/licenses/LICENSE-2.0.html
servers:
- url: https://petstore.swagger.io/api
paths:
/pets:
get:
summary: Returns all pets
description: |
Returns all pets from the system that the user has access to
Nam sed condimentum est. Maecenas tempor sagittis sapien, nec rhoncus sem sagittis sit amet. Aenean at gravida augue, ac iaculis sem. Curabitur odio lorem, ornare eget elementum nec, cursus id lectus. Duis mi turpis, pulvinar ac eros ac, tincidunt varius justo. In hac habitasse platea dictumst. Integer at adipiscing ante, a sagittis ligula. Aenean pharetra tempor ante molestie imperdiet. Vivamus id aliquam diam. Cras quis velit non tortor eleifend sagittis. Praesent at enim pharetra urna volutpat venenatis eget eget mauris. In eleifend fermentum facilisis. Praesent enim enim, gravida ac sodales sed, placerat id erat. Suspendisse lacus dolor, consectetur non augue vel, vehicula interdum libero. Morbi euismod sagittis libero sed lacinia.
Sed tempus felis lobortis leo pulvinar rutrum. Nam mattis velit nisl, eu condimentum ligula luctus nec. Phasellus semper velit eget aliquet faucibus. In a mattis elit. Phasellus vel urna viverra, condimentum lorem id, rhoncus nibh. Ut pellentesque posuere elementum. Sed a varius odio. Morbi rhoncus ligula libero, vel eleifend nunc tristique vitae. Fusce et sem dui. Aenean nec scelerisque tortor. Fusce malesuada accumsan magna vel tempus. Quisque mollis felis eu dolor tristique, sit amet auctor felis gravida. Sed libero lorem, molestie sed nisl in, accumsan tempor nisi. Fusce sollicitudin massa ut lacinia mattis. Sed vel eleifend lorem. Pellentesque vitae felis pretium, pulvinar elit eu, euismod sapien.
operationId: findPets
parameters:
- name: tags
in: query
description: tags to filter by
required: false
style: form
schema:
type: array
items:
type: string
- name: limit
in: query
description: maximum number of results to return
required: false
schema:
type: integer
format: int32
responses:
'200':
description: pet response
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/Pet'
default:
description: unexpected error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
post:
summary: Creates a new pet
description: Creates a new pet in the store. Duplicates are allowed
operationId: addPet
requestBody:
description: Pet to add to the store
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/NewPet'
responses:
'200':
description: pet response
content:
application/json:
schema:
$ref: '#/components/schemas/Pet'
default:
description: unexpected error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/pets/{id}:
get:
summary: Returns a pet by ID
description: Returns a pet based on a single ID
operationId: findPetByID
parameters:
- name: id
in: path
description: ID of pet to fetch
required: true
schema:
type: integer
format: int64
responses:
'200':
description: pet response
content:
application/json:
schema:
$ref: '#/components/schemas/Pet'
default:
description: unexpected error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
delete:
summary: Deletes a pet by ID
description: deletes a single pet based on the ID supplied
operationId: deletePet
parameters:
- name: id
in: path
description: ID of pet to delete
required: true
schema:
type: integer
format: int64
responses:
'204':
description: pet deleted
default:
description: unexpected error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
components:
schemas:
Pet:
allOf:
- $ref: '#/components/schemas/NewPet'
- required:
- id
properties:
id:
type: integer
format: int64
description: Unique id of the pet
NewPet:
required:
- name
properties:
name:
type: string
description: Name of the pet
tag:
type: string
description: Type of the pet
Error:
required:
- code
- message
properties:
code:
type: integer
format: int32
description: Error code
message:
type: string
description: Error message
Generate the source code.
Generate the server code:
go tool oapi-codegen -config server.cfg.yaml petstore-expanded.yaml
Generate the models code:
go tool oapi-codegen -config types.cfg.yaml petstore-expanded.yaml
Add any missing dependencies:
go mod tidy
Implement petstore.go based on the generated interface:
package main
import (
"fmt"
"net/http"
"sync"
"github.com/gin-gonic/gin"
)
type PetStore struct {
Pets map[int64]Pet
NextId int64
Lock sync.Mutex
}
func NewPetStore() *PetStore {
return &PetStore{
Pets: make(map[int64]Pet),
NextId: 1000,
}
}
// sendPetStoreError wraps sending of an error in the Error format, and
// handling the failure to marshal that.
func sendPetStoreError(c *gin.Context, code int, message string) {
petErr := Error{
Code: int32(code),
Message: message,
}
c.JSON(code, petErr)
}
// FindPets implements all the handlers in the ServerInterface
func (p *PetStore) FindPets(c *gin.Context, params FindPetsParams) {
p.Lock.Lock()
defer p.Lock.Unlock()
var result []Pet
for _, pet := range p.Pets {
if params.Tags != nil {
// If we have tags, filter pets by tag
for _, t := range *params.Tags {
if pet.Tag != nil && (*pet.Tag == t) {
result = append(result, pet)
}
}
} else {
// Add all pets if we're not filtering
result = append(result, pet)
}
if params.Limit != nil {
l := int(*params.Limit)
if len(result) >= l {
// We're at the limit
break
}
}
}
c.JSON(http.StatusOK, result)
}
func (p *PetStore) AddPet(c *gin.Context) {
// We expect a NewPet object in the request body.
var newPet NewPet
err := c.Bind(&newPet)
if err != nil {
sendPetStoreError(c, http.StatusBadRequest, "Invalid format for NewPet")
return
}
// We now have a pet, let's add it to our "database".
// We're always asynchronous, so lock unsafe operations below
p.Lock.Lock()
defer p.Lock.Unlock()
// We handle pets, not NewPets, which have an additional ID field
var pet Pet
pet.Name = newPet.Name
pet.Tag = newPet.Tag
pet.Id = p.NextId
p.NextId++
// Insert into map
p.Pets[pet.Id] = pet
// Now, return the newly created Pet (with its assigned ID)
c.JSON(http.StatusCreated, pet)
}
func (p *PetStore) FindPetByID(c *gin.Context, petId int64) {
p.Lock.Lock()
defer p.Lock.Unlock()
pet, found := p.Pets[petId]
if !found {
sendPetStoreError(c, http.StatusNotFound, fmt.Sprintf("Could not find pet with ID %d", petId))
return
}
c.JSON(http.StatusOK, pet)
}
func (p *PetStore) DeletePet(c *gin.Context, id int64) {
p.Lock.Lock()
defer p.Lock.Unlock()
_, found := p.Pets[id]
if !found {
sendPetStoreError(c, http.StatusNotFound, fmt.Sprintf("Could not find pet with ID %d", id))
return
}
delete(p.Pets, id)
c.Status(http.StatusNoContent)
}
Implement the main.go
package main
import (
"log"
"github.com/gin-gonic/gin"
)
func main() {
petStoreAPI := NewPetStore()
router := gin.Default()
RegisterHandlers(router, petStoreAPI)
log.Println("Starting server on :8080")
if err := router.Run(":8080"); err != nil {
log.Fatalf("Failed to start server: %v", err)
}
}
Start the server:
go run .
Curl the API:
curl -I -X GET localhost:8080/pets
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Thu, 17 Apr 2025 01:30:45 GMT
Content-Length: 4
I have faced the same issue. Is there any solution, or should I use another controller?
You're probably still on PHP 7.4, while current WordPress is coded to run on PHP 8+.
I got this error when running the studio app on Windows; it may be related to the flatfile SQLite plugin.
You could encode "?" as a missing value:
import pandas as pd
data = {"x": [1, 2, "?"], 'y': [3, "?", 5]}
df = pd.DataFrame(data)
print(df.isnull().sum())
# x 0
# y 0
df = df.replace("?", pd.NA)
print(df.isnull().sum())
# x 1
# y 1
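If the data originates from a CSV file, the same mapping can be done at read time instead of with a post-hoc replace(). A minimal sketch using pandas' na_values parameter (the CSV content below is made up):

```python
import io

import pandas as pd

# Map "?" to NaN while parsing, via the na_values parameter of read_csv.
csv_data = io.StringIO("x,y\n1,3\n2,?\n?,5\n")
df = pd.read_csv(csv_data, na_values=["?"])
print(df.isnull().sum())
# x    1
# y    1
# dtype: int64
```

This avoids the intermediate object-dtype column that the string "?" would otherwise force.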
Possible reasons:
The for loop runs before showingEventList is assigned an array.
Or the method getOldEvents returns a single element in actionArray, so actionArray.map creates a single object instead of an array.
For Windows Users:
PowerShell opens by default in VS Code.
RUN:
python -m venv venv
RUN:
.\venv\Scripts\Activate.ps1
For Linux/Mac Users:
A Unix shell (such as bash or zsh) opens by default in VS Code.
RUN:
python3 -m venv venv
RUN:
source venv/bin/activate
In service B, your method signature returns
ResponseEntity<Mono<Void>>
instead of
ResponseEntity<Void>
This matters especially because in service A you have
.bodyToMono(Void.class)
You can replace the formula after copying (making sure that the new references will be valid locally). This is what I would do in VBA:
Sheet1.Range("A1").Formula2 = "=" & Mid(Sheet1.Range("A1").Formula2, InStr(Sheet1.Range("A1").Formula2, "'!") + 2)
Do you need help converting this to C#? (I don't have access to VSTO at work right now, but can help later.)
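As a language-neutral illustration of what the VBA one-liner does (the formula string below is made up), the same string surgery looks like this in Python:

```python
# The VBA finds the "'!" that ends the external sheet reference and keeps
# only the part after it, re-prefixed with "=". Illustration only, not the
# VBA itself; the workbook path and cell references are hypothetical.
formula = "='C:\\Book\\[Source.xlsx]Sheet1'!A1+B1"
idx = formula.index("'!")            # position of the closing quote
local_formula = "=" + formula[idx + 2:]
print(local_formula)  # =A1+B1
```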
You can send keystrokes to another window using a tool called AutoHotkey.
OK... still no answer. Fortunately, I can now provide one for those who are looking for it. First I'll explain step by step what you need, and at the end I'll answer all the questions I asked.
I created a repository where I solved some problems with embedding on some platforms, and provided a template project to create a Mono-embedded app where you can write cross-platform C++ and C# code and build it for all the mentioned platforms. You can take a look at it here: NativeMonoEmbedded-App.
First, you should link against the runtime native library (a.k.a. the runtime, the Mono CLR, or libcoreclr, even though this is Mono). It's basically the runtime itself, containing the well-known GC, JIT, and other components. You also need the Mono API include headers to communicate with the runtime.
You may find different names for this library:
coreclr.dll on Windows.
libcoreclr.so on Linux.
libmonosgen-2.0.so on Android.
Framework libraries (or runtime libraries) are a kind of standard library for .NET, but in this context they're typically called the FCL, BCL, or something similar. Basically, they contain all the declarations and the implementation of the standard library.
They consist of:
System.Private.CoreLib.dll - the internal implementation of the core library, which, unlike the other framework libs, contains native code.
System.*.dll, Microsoft.*.dll, WindowsBase.dll, mscorlib.dll, netstandard.dll, and others - framework libraries containing managed code. They contain the public API surface as well as some implementation details.
When you try to initialize the runtime using one of the examples you can find on the internet, the runtime will first try to find the CoreLib. It's the main part of the framework libs. It definitely contains native code, so it should not be shared between multiple platforms/architectures.
If it's missing, the runtime will print something like this to the console:
* Assertion at D:\a\_work\1\s\src\mono\mono\metadata\assembly.c:2718, condition `corlib' not met
So you need to place it either near the executable (like all the other framework libs), or set the assemblies path via mono_set_assemblies_path() or the MONO_PATH env variable to tell the runtime where to look for ANY assemblies (including yours). If you have several paths, separate them with the OS-specific path separator: ; on Windows, : on Linux and Android.
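As a side note, this OS-specific separator convention is the same one Python exposes as os.pathsep, which can be handy when generating a MONO_PATH value from a script (the directory names below are hypothetical):

```python
import os

# os.pathsep is ";" on Windows and ":" on POSIX systems (Linux, Android),
# matching the separator the MONO_PATH env variable expects.
paths = ["/app/assemblies", "/app/plugins"]
mono_path = os.pathsep.join(paths)
print(mono_path)  # e.g. "/app/assemblies:/app/plugins" on POSIX
```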
Besides System.Private.CoreLib.dll, there are the other managed framework libs: DLLs like System.Runtime.dll, System.Console.dll, mscorlib.dll, netstandard.dll, and Microsoft.CSharp.dll. Not all of these may actually be required and loaded; it depends on the dependency chain and on what functionality you use.
There are also native libs required by the framework libs. On Windows these may be msquic.dll, System.IO.Compression.Native.dll, and Microsoft.DiaSymReader.Native.amd64.dll. On Linux they are libSystem.Native.so, libSystem.IO.Compression.Native.so, libSystem.Globalization.Native.so, libSystem.Net.Security.Native.so, and so on. On Android it's almost the same as on Linux, but you may also find JARs like libSystem.Security.Cryptography.Native.Android.jar.
Such native libs must be placed in the folder with all the managed framework libs, or near the executable. BUT on Android, native libs can also live in the libs folder of the APK.
There are also completely optional components that are available on Linux and Android, but not on Windows. One .NET library I added was not working on Android until I also added libmono-component-marshal-ilgen.so, which might perform some IL-code generation. So sometimes you may need one of those components.
Everything required by the runtime is now in place, so next place your own managed and native libs. Again, you can put them near the executable, or in a folder added to the assemblies path. Native libs you use through P/Invoke are searched for near the executable, and also near the assembly containing the P/Invoke method declaration.
However, when you load libraries using NativeLibrary.Load(), LoadLibrary(), or dlopen(), other rules apply: you need to place the lib near the executable, or modify the search paths with AddDllDirectory() on Windows or RPATH on Linux.
I advise you to look at the NuGet packages named Microsoft.NETCore.App.Runtime.Mono.*. They contain the necessary built runtime files for all platforms and architectures, and from them you can learn what may be needed.
Download the archives and look inside. In the runtimes/[arch]/lib/net[version]/ folder you'll find all the managed framework libs; they should contain only managed code. Besides the *.dlls, there are also Microsoft.NETCore.App.deps.json and Microsoft.NETCore.App.runtimeconfig.json; you don't need them.
runtimes/[arch]/native/ contains the native code. Here you'll find the runtime native lib, the CoreLib, and the other important native libs required by the framework libs. There can also be hostfxr and hostpolicy native libraries, but you don't need them (they are for CoreCLR hosting). There's also the include folder with the Mono API headers.
As I understand, those components are optional?
Yes.
So what is required?
The runtime native lib (libcoreclr.so), the CoreLib, and the other framework libs (managed and native).
Do I need to place libcoreclr.so and System.Private.CoreLib.dll near the executable?
Yes, but it depends. Typically, if you link against coreclr.dll (or libcoreclr.so) at build time, you place all the native libs near the executable. On Windows you could also use AddDllDirectory(), and on Linux you could set RPATH to add shared-library search paths, so that coreclr.dll/libcoreclr.so can live elsewhere. You can of course also load the shared libs dynamically and then get pointers to the necessary functions, which likewise allows you to place the runtime lib elsewhere.
System.Private.CoreLib.dll is always searched for near the executable, but more search paths can be added.
Or are there more files I need to find somewhere in the build artifacts folder?
Yes, just the CoreLib is not enough. As I said, you need the other framework libs like System.Runtime.dll, System.Console.dll, and so on.
Also another question: is the modern Mono just a slimmer version of CoreCLR, with communication between my native executable and the runtime happening via the Mono API? Is this understanding correct?
Yes, Mono is a kind of slimmer version of CoreCLR. There is separate code for the Mono runtime, which essentially works much like the old Mono (with its own JIT and GC), though the framework libs are shared between both runtimes (some runtime-specific overrides may also be present). And yes, your understanding is mostly correct.
Also another question: there's no static library of the CoreCLR, I have to link it dynamically? There's really no way to statically link it?
I still don't know; maybe there is a compilation option for this. Anyway, I decided to use the shared libs.
Because of JavaScript engine optimizations, especially in V8:
When both operands are known constants (like two literals), the engine can apply constant folding and dead code elimination, making both == and === extremely fast — sometimes even optimized away entirely.
The loose equality == skips the type check when both operands are already the same type, which can make it marginally faster than === in tight loops involving primitives.
However, the difference is negligible and only shows up in artificial micro-benchmarks.
In real-world usage, === is just as fast or faster, and should always be preferred unless coercion is explicitly needed.
How about using a table to fill the div and using text formatting:
<div style="border:1px solid #ff0000; height:100px; width:100px;" >
<table width="100%" height="100%"><tbody><tr>
<td valign="bottom" align="center">A Text</td>
</tr></tbody></table>
</div>
I ran into a similar issue when working on a JSON file that had too few indents. In my case, switching the Indent setting from 2 to 4 increased the default indent size.
In your case, switching the Indent setting from 4 to 2 might work?
Google OAuth2.0 Scopes page: https://developers.google.com/identity/protocols/oauth2/scopes
To access employeeId, the required OAuth2.0 scope for read only purposes is: https://www.googleapis.com/auth/admin.directory.user.readonly
For full read/write, use: https://www.googleapis.com/auth/admin.directory.user
Note that you will need to enable/add the Admin SDK API. The employeeId is part of the externalIds array in the user resource, retrieved by calling:
GET https://admin.googleapis.com/admin/directory/v1/users/{userKey}
Response:
"externalIds": [
{
"type": "custom",
"customType": "employee",
"value": "12345"
}
]
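A minimal sketch of pulling the employee ID out of such a response, using plain Python rather than an API client (the JSON literal just mirrors the fragment above; values are illustrative):

```python
import json

# A user resource fragment shaped like the Directory API response above.
user_json = """
{
  "externalIds": [
    {"type": "custom", "customType": "employee", "value": "12345"}
  ]
}
"""
user = json.loads(user_json)

# externalIds is an array, so pick the entry whose customType is "employee".
employee_id = next(
    (e["value"] for e in user.get("externalIds", [])
     if e.get("customType") == "employee"),
    None,
)
print(employee_id)  # 12345
```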
It seems you are trying to use xlwings as a dependency. ~/.local/bin is intended for executables, and after pipx install xlwings the binary is located there. pipx only supports calling apps, not importing modules.
I would suggest keeping the script running within a virtual environment and ensuring that the module is properly installed via pip.
Related question: What is the difference between pipx and using pip install inside a virtual environment?
Check this: vue-deckgl-suite. It integrates Deck.gl with Vue 3, offering declarative components and support for MapLibre, Google Maps, Mapbox, and ArcGIS.
It's because of inconsistent angular.json and package.json, I assume. I removed "schematicCollections": ["@angular-eslint/schematics"] from angular.json and everything works.
The fiscal year starts on 01-Jul while the calendar year starts on 01-Jan, so the fiscal year starts about 6 months after the calendar year (hence the 1-7 = -6 in the formula):
= DATE(YEAR(EDATE(A2, 1-7)), 7, 1)
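The same logic, sketched in Python for clarity: EDATE(A2, 1-7) shifts the date back six months, and the fiscal year then starts on 1 July of that shifted year.

```python
from datetime import date

def fiscal_year_start(d: date) -> date:
    # Mirrors =DATE(YEAR(EDATE(A2, 1-7)), 7, 1): dates in Jul-Dec belong
    # to the FY starting that year; Jan-Jun to the FY starting the year before.
    shifted_year = d.year if d.month >= 7 else d.year - 1
    return date(shifted_year, 7, 1)

print(fiscal_year_start(date(2024, 3, 15)))   # 2023-07-01
print(fiscal_year_start(date(2024, 10, 2)))   # 2024-07-01
```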
Did you find a solution to this issue?
Remove the null. New versions only take 2 inputs.
Once I moved all the logic into the custom class, it worked. Earlier I was trying to check some properties with the default lambda expression and some with the customType; that didn't work.
I found another workaround, like this; I am not sure if it is the correct approach:
~~~
from typing import Annotated

from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

llm = ChatOpenAI(temperature=1.0)

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

chat_history = []

def chatbot(state: State):
    return {"messages": llm.invoke(state["messages"])}

def past_message_node(state: State):
    # Record the conversation so far
    chat_history.append(state["messages"])
    print("Conversation history")
    print("Message History:", chat_history)
    return state

graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("past_message_node", past_message_node)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", "past_message_node")
graph_builder.add_edge("past_message_node", END)
graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "q"]:
        print("Chat Ended")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            messages = value["messages"]
            last_message = messages[-1] if isinstance(messages, list) else messages
            print("Assistant: ", last_message.content)
~~~
docker-compose is supported by Compose version 1. With Compose version 2 there is no need for the hyphen, and it is simply docker compose.
I had the same problem. Just delete Debug.log from C:\Users\{username}\AppData\Local\.vstools\Azurite, then try again. Apparently, if the log gets too large, the emulator fails to start upon debug.
That's solved with a simple piece of code:
self.customer_widget_instance = CustomerWidget(self.customer_widget)
A colleague has just given me a clue and it turned out to be the correct answer.
When initializing the PrintServer, if you pass in the path of the machine that the printer is on as a parameter and just use GetPrintQueues() without any parameters, then you are talking directly to the networked machine that is the PrintServer, which gets the updated status correctly.
If you initialize the PrintServer without a parameter, and use the flags as I have posted, it is talking to your local machine which is not getting the updated status until it is re-polled (in this case presumably by the "Printers & Scanners" dialog)
I guess the issue is with SSR: styles need to be injected as soon as possible. Here is the official manual from Next; depending on your router there are 2 different solutions. If you want, post details of which router version you have and I will help you with code examples.
Installation worked out of the box: sudo apt install gnuplot-qt
. I decided to use the Qt version, since I am also very happy with the Python/Matplotlib backend QtAgg (see https://matplotlib.org/stable/users/explain/figure/backends.html)
Calling gnuplot from LaTeX also worked out of the box. In TeXstudio / Options / Configure TeXstudio... I use pdflatex -synctex=1 -shell-escape -interaction=nonstopmode %.tex
.
\begin{tikzpicture}
\begin{axis}[
xmin=-pi, xmax=pi,
ymin=-2, ymax=2,
title=Sine Wave,
legend pos=south east, legend style={draw=none},
]
% gnuplot script inside brackets {}
\addplot gnuplot[raw gnuplot, id=sin,mark=none,color=violet] {plot sin(x)};
\addlegendentry{sin(x)}
\end{axis}
\end{tikzpicture}
Thank you for your help.
Best regards, philipp
Possibly, since flutter_dotenv 5.0+, it tries to read .env from the AssetBundle. This means the file should be added under assets:
in pubspec.yaml:
flutter:
assets:
- .env
But that means the file will be included in the build! And I am actually trying to prevent exposing my .env.
Something like this might be useful.
(%i) table_form(makelist([i,i*10],i,1,5));
The output will be:
1 10
2 20
3 30
4 40
5 50
Hope it helps!
I switched back to the MariaDB connector and now it works.