Delete the unrecognized flag (e.g., -fmodules-ts)... C++20 provides support for modules natively. You don't need that flag when compiling with -std=c++20, and it's likely that selecting C++20 means -fmodules-ts is no longer a supported flag.
There is also a way of reading only part of the file if you are memory-bound.
https://www.mathworks.com/help/matlab/ref/audioread.html
audioread
Syntax: [y,Fs] = audioread(filename,samples)
Description: [y,Fs] = audioread(filename,samples) reads the selected range of audio samples in the file, where samples is a vector of the form [start,finish].
There is usually no single way to tell you exactly which RDS instance type you should move to. The usual way to “recommend” the right type is to look at the Database CloudWatch metrics and understand what part of the DB is actually being used.
If you’ve already identified under-utilized databases, the next question is what they’re under-utilizing. For example, if CPU is always low, you can step down to a smaller instance in the same family without hurting performance. If memory usage is consistently low and you are not touching swap, that is another sign you can safely downsize.
On the other hand, if memory is tight all the time, moving to a memory-optimized family may be the better option.
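As a rough illustration (not an official sizing tool), here is one way you might pull the last two weeks of CPU utilization for a single instance with boto3; the instance identifier is a placeholder, and FreeableMemory can be queried the same way:

import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")
resp = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",   # swap for "FreeableMemory" etc.
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                   # hourly datapoints
    Statistics=["Average", "Maximum"],
)
datapoints = sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
print(max(d["Maximum"] for d in datapoints))   # peak hourly CPU over two weeks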
This can happen if you have multiple nested iframes - all parent frames need to specify sandbox="allow-forms" or the permission will be denied.
You should be able to get this done with EventBridge and Lambda. Check this doc to see if it fits your use case:
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html
A workaround is to remove the C:\ProgramData\Amazon\EC2Launch\state\.run-once state file in your userdata script, so when the EC2 instance reboots, it treats itself as a newly created instance and performs initialization via the Amazon EC2Launch service.
The "Amazon EC2Launch" service on Windows checks whether this file exists at boot. If it exists, it does nothing; if it does not exist, it performs the normal EC2Launch work (e.g. run userdata, reset the password, ...) and creates a blank .run-once file.
This will work for you, BUT every time you reboot, Amazon EC2Launch will reset the Administrator's password, and you will have to retrieve a new password with your key from the console again. I think that is easily fixable by creating an additional user for you to use.
If you’re still looking for a tool to find unused code, you might like a new VS Code extension I built – Dart Unused Code. It scans your Flutter/Dart codebase and highlights unused functions (classes and variables coming soon). The extension works entirely offline, integrates with VS Code and shows unused elements directly in the editor. Would love to hear your feedback!
Use case: astronomy, calculate the appearance of the night sky in ancient times. It's true you don't need an exact date for that, but you need something - typically any day of the year would do. Second use case: I saw a bug report recently for negative Julian dates, in the context of software that was modeling the progress of ice sheets over long time scales.
@bad_coder: Our documentation structure is organized as a number of subdirectories, most of which contain a file named index.rst representing an overview of that directory's contents, which results in a file named index.html. It's the links to these files that I'm hoping to replace with links to the directories themselves (our webserver is configured to treat a path/to/ URL as path/to/index.html).
We use celery queues for long running test tasks, so we want only one task per worker. After fighting this same problem using task_acks_late but getting occasional unwanted prefetches, I did some code spelunking and additional testing using much newer versions than discussed in this thread. (Celery 4.4.7 and 5.4.0 using rabbitmq-server 3.12.11).
I find the following:
As noted in David Wolever's reply,
worker_prefetch_multiplier = 1
does not stop prefetching. It really means "prefetch 1 in addition to the already running task".
As observed using RabbitMQ monitoring (http://myhost:15672), setting
worker_prefetch_multiplier = 0
does indeed seem to stop prefetching. More long-running testing is needed to verify, but reviewing the code shows the prefetch value gets passed (more or less) directly to the RabbitMQ qos calls as prefetch_count. Setting it to zero does not appear to "allow the worker to keep consuming as many messages as it wants". Rather, it sets the qos prefetch_count to zero.
The documentation (https://docs.celeryq.dev/en/stable/userguide/configuration.html) is incorrect about both of these setting values, at least with RabbitMQ.
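For reference, a minimal sketch of the configuration this testing converged on (standard Celery setting names; worker_concurrency = 1 is my assumption from the "one task per worker" requirement above):

# celeryconfig.py
worker_prefetch_multiplier = 0   # passed (more or less) directly to RabbitMQ's basic.qos prefetch_count
task_acks_late = True            # acknowledge only after the task finishes
worker_concurrency = 1           # one long-running test task per worker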
Your code must have 5 "a" tags. Great!
Your "a" tags must have an "href" attribute. Great!
Your code must have 5 "img" tags. Great!
Your "img" tags must have a "src" attribute. Great!
Your 5 books must link to their Wikipedia pages. Try again!
I found that targeting the swiper scrollbar CSS caused knock-on effects with the translate-3d that Swiper had already calculated, so the best solution I found was just setting it via the wrapper:
<!-- Custom Scrollbar -->
<div class="w-md max-w-4/5 mx-auto relative top-6">
<div class="{{ $uniqueId }}-scrollbar swiper-carousel-scrollbar"></div>
</div>
This is Tailwind CSS but you get the idea: set the desired width on the wrapper, cap it at max-width 80%, center it (margin: 0 auto), set it to position: relative since the scrollbar is absolute, and move it down slightly so it's not hidden under the slider.
This has worked a treat.
Might be useful to include exactly how you read the file into the records?
Thanks guys! I settled on a script using inotifywait. I was hoping for something more complete than a command for a script I'd write and less comprehensive than systemd.
The problem was the certificate. It was not assembled correctly.
The original solutions that were publishing OK had a reference to System.Net.Http but did not have a using System... statement in the code.
An option I found appealing was using a "dummy" heatmap from Plots.jl and combining the graphs using @layout.
using Plots, DataFrames, Random
# Data
df = DataFrame(a='a':'e', b = rand(5), c=rand(5))
# Line plot
plot_lines = plot(df.a, df.b)
plot!(df.a, df.c)
# Dummy heatmap
# Get column names and row names
row_labels = df[:, 1]
col_labels = names(df)[2:end]
col_labels = replace.(col_labels, " Change" => "\nChange")
# Extract matrix of values (excluding the first column which is the label)
matrix_vals = Matrix(df[:, 2:end])
# Prepare annotations for each cell
ann = [(j - .5, i - .5, text("$(round(matrix_vals[i,j], digits=2))", 8, :black, :center)) for i in 1:size(matrix_vals,1), j in 1:size(matrix_vals,2)]
ann = reduce(vcat, ann)
# Annotated heatmap
plot_df = heatmap(
string.(col_labels),
string.(row_labels),
fill(0.92, size(matrix_vals)), # all tiles are light gray
xlabel = "", ylabel = "",
color = cgrad([RGB(0.92,0.92,0.92), RGB(0.92,0.92,0.92)]), # enforce light gray
xticks=:auto,
yticks=:auto,
yflip=true,
framestyle = :box,
annotations=ann,
colorbar=false
)
# Adding grid lines
n_rows, n_cols = size(matrix_vals)
for v in 1:n_cols-1
vline!([v], c=:black, lw=.5, label=nothing)
end
for h in 1:n_rows-1
hline!([h], c=:black, lw=.5, label=nothing)
end
# Simple layout
l = @layout [
a{0.7h}
b{0.3h}
]
plot(plot_lines, plot_df, layout = l)
Disk space is too low. Only 0.511GB left on /tmp.
Go to <jenkins_url>/computer/configure, or from the UI Manage Jenkins → Nodes and then click on Configure Monitors (on latest weekly)
Free Disk Space
Free Temp Space
Response Time
For all three, check the "Don't mark agents temporarily offline" box.
It is on the order of the sorting algorithm used by the scheduler for the wait queue, and that doesn’t scale well for larger lists. It likely uses the timer_list structure, which is a linked list, implemented as an insertion sort, which is O(n^2).
For what it is worth, sleep only guarantees to the sleeping process that it will wake up after the deadline has expired. There is no guarantee of order of awakening. Other activity (virtual memory paging) could also impact the output order if the delays are in too small of an increment.
@David Maze Thanks for the response, I am not fully understanding how this works. Can you give me some more advice?
For more context I am using a streamlit app.
Here's what I am trying:
Remove settings = Settings() from settings.py
Add settings = Settings() to config.py
Make modules import from config: from .config import settings
This will make sure settings are declared once, prevent the side effects of Settings() being run every time I import settings.py, and let me patch it in pytest.
So here would be the new implementation based on what you said...
# settings.py
from pydantic_settings import BaseSettings, SettingsConfigDict
from pydantic import Field

class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_file=".env",
    )
    env_var: str = Field(description="blah blah blah")
    # ...
    env_var_d: str = Field(description="blah blah blah")

# config.py
from .settings import Settings

def get_settings():
    return Settings()

# some_module.py
from .config import get_settings

def some_func():
    settings = get_settings()

# conftest.py
import pytest
from .settings import Settings
from unittest.mock import patch

# Does not work as expected
@pytest.fixture(autouse=True, scope="session")
def settings():
    with patch('pckg.config.get_settings', Settings(_env_file=".env.test")):
        yield
Since I can't delete the question, I added the context that caused the initial confusion.
An option I found more appealing was using a "dummy" heatmap from Plots.jl.
# Data
df = DataFrame(a='a':'e', b = rand(5), c=rand(5))
# Get column names and row names
row_labels = df[:, 1]
col_labels = names(df)[2:end]
# Extract matrix of values (excluding the first column which is the label)
matrix_vals = Matrix(df[:, 2:end])
# Prepare annotations for each cell
ann = [(j - .5, i - .5, text("$(round(matrix_vals[i,j], digits=2))", 8, :black, :center)) for i in 1:size(matrix_vals,1), j in 1:size(matrix_vals,2)]
ann = reduce(vcat, ann)
# Annotated heatmap
plot_df = heatmap(
string.(col_labels),
string.(row_labels),
fill(0.92, size(matrix_vals)), # all tiles are light gray
xlabel = "", ylabel = "",
color = cgrad([RGB(0.92,0.92,0.92), RGB(0.92,0.92,0.92)]), # enforce light gray
xticks=:auto,
yticks=:auto,
yflip=true,
framestyle = :box,
annotations=ann,
colorbar=false
)
# Adding grid lines
n_rows, n_cols = size(matrix_vals)
for v in 1:n_cols-1
vline!([v], c=:black, lw=.5, label=nothing)
end
for h in 1:n_rows-1
hline!([h], c=:black, lw=.5, label=nothing)
end
plot_df
The correct answer is that the blob versioning feature was added in API version 2019-12-12.
Documentation can be found here - https://learn.microsoft.com/en-us/rest/api/storageservices/version-2019-12-12 (last point).
We need to explicitly specify the x-ms-version header.
This is the "API version", which is not the same as the "version of the blob".
When you don't send the x-ms-version header, it defaults to 2009-07-17, which doesn't have the feature.
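For illustration, a hedged sketch of sending that header with plain requests; the account, container and SAS token below are placeholders, and listing blob versions is just one operation that needs the newer API version:

import requests

# Placeholders: your storage account, container, and a SAS token with list permission.
url = ("https://<account>.blob.core.windows.net/<container>"
       "?restype=container&comp=list&include=versions&<sas-token>")

resp = requests.get(url, headers={"x-ms-version": "2019-12-12"})
print(resp.status_code)
print(resp.text[:500])  # the XML listing now carries VersionId elements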
I further explored why conda search yielded no python packages and found that conda config --show channels yielded an empty channel list. With conda list --show-channel-urls I saw all the packages were in defaults, which I added with conda config --add channels defaults. I was able to execute conda create -n py312 python=3.12 and switch to that environment with conda activate py312.
In this particular case, as suggested by several comments, the issue was the NVARCHAR(MAX) columns. After shortening many of them and pulling a couple out to other tables, the basic query is running orders of magnitude faster.
Also helping, there's another table that needs to have an NVARCHAR(MAX) column for most purposes, but there are a couple for which it's not needed; in those couple of cases, projecting the object down to not include that data is also helping tremendously.
@Andrew Henle, numerous standard C functions that are also specified to return -1 on an error --> Off hand, I can think of 2 sets: time() & friends and various wc...(). There are many I/O functions that return EOF, yet that is not specified as -1 - just some negative value. POSIX I/O functions could have done likewise.
How do you plan to use this beast? Let's assume for a moment that you somehow made it to work (which I doubt is possible, not with the syntax you seem to envision) - how would you ensure that ServiceBuilder::MyDynamicTuple means the same type in every translation unit (otherwise, you are asking for an ODR violation)?
What problem are you really trying to solve with this? As stated, it sounds like an XY problem
Pasteboard.image returns bytes when the clipboard contains raw bitmap data (as seen in the package's example); otherwise it returns null.
I needed this functionality on Windows, but "CTRL + C" or "Right Click -> Copy" on an image does not behave like this: the clipboard contains the file path instead.
What I did to overcome this is to use final paths = Pasteboard.files();, iterate over the list, filter out the entries that end with .png, .jpeg or .jpg, and try to parse each such path as a file;
final file = File(imagePath);
I'm a Developer Advocate with Google Cloud.
I've been involved with this launch, and what has worked best for users so far is making sure that they're logging in with a personal Gmail account that is not connected to a Workspace. Even a brand new Gmail account should work.
This thread has a lot of suggestions around browser defaults that seem reasonable, but have not been barriers to successful authentication in my experience so far.
Switch to another channel and then return to the channel you were on before.
For example, channel main,
and after upgrading, channel stable.
This resets the channel and forces a clean download of the latest build for that channel.
I have the following problem. When I move to the left and after this I move to the right, the screen is not at the same position. The window always scrolls more to the left than to the right. How can I solve this? An offset is not working because the window movement is not always the same. Does someone have an idea?
In my case, it happened when I enabled the deploy key in the Organization settings; the personal key then does not work on the organization repos.
The fix is to disable the deploy key.
Even though running "kotlin test.kts" only works if:
kotlin.bat path doesn't have a space in it
context folder (what pwd shows) doesn't have a space
relative/full path to *.kts file is quoted to escape spaces
I can call:
PS H:\My Drive\project> kotlinc -script test.kts
and it works. What is the point of just "kotlin.bat" then? I don't know.
My bad, it looks like omitting the return behaves the same as in JS, so my issue is caused by something else... I assumed it was a problem with the recursive call, and I can avoid dumping the whole thing.
You seem to be unsure if the return statement is required in php; what happens when you omit the return in the PHP?
So, the PHP version returns because otherwise I can't call the function - or can I? I know the functions are different; that's the question: can I replicate the behavior from JS in PHP? Can I call a function inside itself without using the return keyword? Return stops execution, and I want the function to continue after the recursive call.
I want the second bracket value.
This is working for me,
.my-table:has(tbody > tr > th) thead th:first-child {
border-bottom: 1px solid #ddd !important;
}
If it is a "shortener" link eg t.co, bit.ly, tinyurl.com, goo.gl, you can use https://www.whatsmydns.net/url-unshortener
(Another variant of this question: https://webapps.stackexchange.com/questions/31748/how-can-i-track-down-t-co-links)
What are your exact requirements for the resulting string, e.g. do you want the second bracketed value? or all bracketed values?
A hash is a function (many-to-one), so a value must always map to the same hash, which violates your second rule. So the short answer is no, but you could have a probabilistic function that assigns higher probabilities to the left digits - I'm just not sure how useful that would be.
prompt() is a blocking function, meaning it pauses the script until the user interacts with it. You would need to separate out your functions and call the next piece with an event listener in your modal.
There is currently a major outage: https://www.githubstatus.com/ affecting all git operations.
Also, the two functions are not the same! Look at the detail: the JS version does NOT return, but the PHP version does. That is why the "after" line is never reached.
What's the purpose of the "after" statement? These functions do slightly different things and the JS function would also not display the second `console.log()` if you used a return statement there.
If you want to return a value you will need a return statement. Also, to avoid an infinite loop, it is best practice to always have an exit condition, preferably at the beginning of the recursive function (early-exit strategy).
The only way to know about other tabs is if your tab spawned them. As the opener it could receive messages back from other tabs (see postMessage: https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage ). There are some response headers that also control what an opener can do with that tab handle (such as "Cross-Origin-Opener-Policy": https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Cross-Origin-Opener-Policy ) - for instance, navigating away from the initial URL.
For me, GitHub was just down, which you can check here: https://www.githubstatus.com/
I ENTERED THE WRONG WAREHOUSE NAME!!!
dbt_warehouse instead of dbt_wh
If you're able to edit the html, you can add a span around the checkbox, and style that instead of styling the checkbox directly.
<style>
label {
float: left;
margin: 5px 0px;
width: 150px;
}
label + * {
margin: 5px 0px;
width: 200px;
}
.clear {
clear: both;
}
</style>
<div>
<label for="autologin2">Remember Me 2</label>
<span><input type="checkbox" class="checkbox" id="autologin2" name="autologin2" value="1"></span>
<div class="clear">
</div>
You can securely put a Google Map in your email: your web app grabs the map picture server-side using your hidden key, and then sends that picture along with the email instead of asking the recipient's mail program to fetch it.
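A rough sketch of that flow in Python, assuming the Google Static Maps endpoint, a key kept server-side in an environment variable, and placeholder addresses and SMTP host:

import os
import smtplib
import requests
from email.message import EmailMessage

# Fetch the map image server-side; the key never reaches the recipient.
params = {
    "center": "48.8584,2.2945",
    "zoom": "15",
    "size": "600x400",
    "key": os.environ["MAPS_API_KEY"],
}
png = requests.get("https://maps.googleapis.com/maps/api/staticmap",
                   params=params, timeout=10).content

# Attach the picture to the outgoing mail instead of linking to it.
msg = EmailMessage()
msg["Subject"] = "Your map"
msg["From"] = "app@example.com"
msg["To"] = "user@example.com"
msg.set_content("Map attached.")
msg.add_attachment(png, maintype="image", subtype="png", filename="map.png")

with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)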
Not really sure why the number of syscalls would matter in that discussion if FileReader and InputStreamReader, having an internal buffering mechanism, perform the same number of syscalls as the BufferedReader.
Thank you all for your answers. I'm still somewhat confused about how in my example ap isn't sent by pointer to va_arg() but needs to be sent by pointer to the function calling va_arg() or something very bad might happen. Maybe my confusion stems from me thinking va_arg() is a function while it's actually a macro, which is very different now that I think about it.
Richard, I also tried mixing types up between int and long but couldn't get any incorrect or unexpected return from va_arg(). I tried to mishandle it by giving it incorrect types but didn't manage to break it.
Thank you for mentioning that stdarg.h and stdio.h are bad practice. I still need to use those as part of my C programming course but at least now I know not to use them moving forward.
Because there was simply no Java library that could control the original LEGO® Education Inventor Hub (51515) with its factory firmware, I built RemoteBrick completely from scratch.
Open-source project → https://github.com/juniorjacki/RemoteBrick
Now you can finally write real Java programs for SPIKE™ Prime and Robot Inventor – no flashing, no Pybricks, no workarounds needed.
RemoteBrick communicates directly with the hub via Bluetooth Classic (SPP) and gives you 100 % of the features the official LEGO app has – just in pure Java.
Current Features in v1.3.0:
Quick Start (3 Steps)
Example – Connect & Drive
import de.juniorjacki.remotebrick.Hub;
import de.juniorjacki.remotebrick.devices.Motor;
import de.juniorjacki.remotebrick.types.*;

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        try (var hub = Hub.connect("AA:BB:CC:DD:EE:FF")) { // ← your hub's MAC
            if (hub == null) {
                System.out.println("Connection failed!");
                return;
            }
            Motor left = (Motor) hub.getDevice(Port.A);
            Motor right = (Motor) hub.getDevice(Port.B);

            // Heartbeat animation + drive forward 3 seconds
            hub.getControl().display().animation(Animation.HEARTBEAT, true).send();
            hub.getControl().move().startSpeeds(left, right, 75, 75, 100).send();
            Thread.sleep(3000);
            hub.getControl().move().stop(left, right, StopType.BRAKE).send();

            System.out.println("Done! Battery: " + hub.getBatteryPercentage() + "%");
        }
    }
}
I had the same issue until I changed my default browser to Chrome. I don't know if this is a universal fix, but if you already meet the requirements in the FAQ and nothing else has worked, maybe try this.
Same issue. In some X threads like this one: https://x.com/antigravity/status/1990813606217236828 (comments section), they mention that they're working on resolving these overload issues.
Looks like a lot more people are trying to download this. They released similar tools like Jules earlier, but those were not clubbed with major model releases like Gemini 3.0. This time they've clubbed the two, so they seem to have received a lot more traffic than expected.
It should be fixed in a few hours when the traffic decreases.
Tried this workaround and was able to complete the process successfully.
Workaround: in Terminal Execution Policy, uncheck "Use the default allowlist for the browser".
Same issue here - I am using a personal account - pretty typical :(
var hasValue = Application.Current.Resources
.TryGetValue("Primary01", out object primaryColorObj);
if (hasValue && primaryColorObj is Color primaryColor)
rtn = primaryColor;
else
rtn = Color.FromHex("#173880");
Same exact issue. I saw in the documentation that it needs to be a personal account, i.e. one not tied to a Workspace, but I used my personal account as well and I'm still stuck.
My code and everything was 100% fine; the issue was with iOS 26.
iOS 26 has issues with notification sounds, and I had to restart the physical phone and switch it into ring mode and silent mode in order for the notification sounds to work correctly.
I realized that after I tested the app on an iPhone with iOS 17.4 and the sounds worked immediately.
Adding to @TheDoomDestroyer's answer: disabling only those settings didn't work.
I also had to **disable** this new one, which was turned on by default: 'Limit job authorization scope to referenced Azure DevOps repositories'
# Create the He_100M_e.txt file
with open("He_100M_e.txt", "w") as f:
    f.write("H" + "e" * 100_000_000)
I had the exact same problem and finally figured out what I was doing wrong. The requests module references the urllib3 module which references the ssl module. I had a file in my project called ssl.py and so the import ssl directive was picking up my file instead of the module; hence, the missing methods error.
With conditional number formatting you can skip displaying "0h" and "0mn":
[<0.000694][ss]"s";[<0.04166][m]"m" ss"s";[h]"h" mm"m" ss"s"
This will display 2m 35s and 59s instead of 0m 59s.
Do keep in mind that in Excel, 1 = 1 day, so 1 hour is 1/24, 1 minute is 1/(24*60) and 1 second is 1/(24*3600).
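A quick sanity check of that arithmetic (in Python here, only because the fractions are the same wherever you compute them):

# Excel stores times as fractions of a day: 1 = 1 day.
one_hour = 1 / 24            # ~0.041666, the [<0.04166] threshold above
one_minute = 1 / (24 * 60)   # ~0.000694
one_second = 1 / (24 * 3600)

value = 2 * one_minute + 35 * one_second   # 2m 35s as an Excel serial value
print(value)                               # ~0.001794
print(value < one_hour)                    # True: handled by the "m" branch of the format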
Add this before the webview:
WebKit2.WebView.static_type();
Use this and your problem should be fixed.
frame = tk.Frame(root, width=600, height=400, background="seashell3")
frame.pack(padx=10, pady=10) # pack it on the next line
tk.Label(frame, text="Select a file to encrypt:", fg="black").pack(pady=10)
Two solutions to this problem can be found at https://learn.microsoft.com/en-us/answers/questions/5600456/excel-vba-strange-runtime-error-6-overflow?page=1&orderby=Helpful&comment=answer-12314719&translated=false#newest-answer-comment. The simplest is to add right after the Debug.Print statement:
DoEvents ' Force UI refresh
We can now use MKReverseGeocodingRequest.
A quickish go with my own preferred tools. Is this what you're describing?
library(data.table)
library(fjoin) # install.packages("fjoin", repos=c("https://trobx.r-universe.dev"))
# individual-specific end dates
df_limits <- fread("
id end_date
1 2021-06-15
2 2018-09-03
3 2016-03-30
")
# spells of care
df_spells <- fread("
id spell start end
1 1 2015-01-13 2015-07-19
1 2 2016-04-14 2017-02-07
1 3 2018-05-24 2019-10-15
2 1 2015-07-23 2017-01-05
2 2 2018-02-13 2018-06-04
2 3 2019-07-31 2021-02-04
3 1 2015-02-16 2016-11-19
3 2 2018-04-29 2020-03-01
")
START <- as.IDate("2014-01-01")
# how many months back do we need to go per id?
df_limits[, months_back := 12L * (year(end_date) - year(START)) + month(end_date)]
# expand to a grid for each id
grid <- df_limits[, .(id, end_date, month_end=seq(end_date, length.out = months_back, by = "-1 month")), by=.I][, I := NULL]
grid[, month_start:= fifelse(id==shift(id, type="lead"), shift(month_end, type="lead") +1L, START)]
grid[.N, month_start := START]
setcolorder(grid, c("id", "end_date", "month_start", "month_end"))
setkeyv(grid, c("id", "month_start", "month_end"))
# overlaps of care spells with the grid for each patient
ans <- fjoin_left(grid,
df_spells,
on = c("id", "month_end >= start", "month_start <= end"),
mult.x = "first", # in case there are overlapping care spells (we don't want multiple "hits")
indicate = TRUE) # 3L if match, 1L if not
ans[, in_care := .join==3L]
# output (for patient 2)
ans[id==2L]
Key: <id, month_start, month_end>
.join id end_date month_start month_end spell start end in_care
<int> <int> <IDat> <IDat> <IDat> <int> <IDat> <IDat> <lgcl>
1: 1 2 2018-09-03 2014-01-01 2014-01-03 NA <NA> <NA> FALSE
2: 1 2 2018-09-03 2014-01-04 2014-02-03 NA <NA> <NA> FALSE
3: 1 2 2018-09-03 2014-02-04 2014-03-03 NA <NA> <NA> FALSE
4: 1 2 2018-09-03 2014-03-04 2014-04-03 NA <NA> <NA> FALSE
5: 1 2 2018-09-03 2014-04-04 2014-05-03 NA <NA> <NA> FALSE
6: 1 2 2018-09-03 2014-05-04 2014-06-03 NA <NA> <NA> FALSE
7: 1 2 2018-09-03 2014-06-04 2014-07-03 NA <NA> <NA> FALSE
8: 1 2 2018-09-03 2014-07-04 2014-08-03 NA <NA> <NA> FALSE
9: 1 2 2018-09-03 2014-08-04 2014-09-03 NA <NA> <NA> FALSE
10: 1 2 2018-09-03 2014-09-04 2014-10-03 NA <NA> <NA> FALSE
11: 1 2 2018-09-03 2014-10-04 2014-11-03 NA <NA> <NA> FALSE
12: 1 2 2018-09-03 2014-11-04 2014-12-03 NA <NA> <NA> FALSE
13: 1 2 2018-09-03 2014-12-04 2015-01-03 NA <NA> <NA> FALSE
14: 1 2 2018-09-03 2015-01-04 2015-02-03 NA <NA> <NA> FALSE
15: 1 2 2018-09-03 2015-02-04 2015-03-03 NA <NA> <NA> FALSE
16: 1 2 2018-09-03 2015-03-04 2015-04-03 NA <NA> <NA> FALSE
17: 1 2 2018-09-03 2015-04-04 2015-05-03 NA <NA> <NA> FALSE
18: 1 2 2018-09-03 2015-05-04 2015-06-03 NA <NA> <NA> FALSE
19: 1 2 2018-09-03 2015-06-04 2015-07-03 NA <NA> <NA> FALSE
20: 3 2 2018-09-03 2015-07-04 2015-08-03 1 2015-07-23 2017-01-05 TRUE
21: 3 2 2018-09-03 2015-08-04 2015-09-03 1 2015-07-23 2017-01-05 TRUE
22: 3 2 2018-09-03 2015-09-04 2015-10-03 1 2015-07-23 2017-01-05 TRUE
23: 3 2 2018-09-03 2015-10-04 2015-11-03 1 2015-07-23 2017-01-05 TRUE
24: 3 2 2018-09-03 2015-11-04 2015-12-03 1 2015-07-23 2017-01-05 TRUE
25: 3 2 2018-09-03 2015-12-04 2016-01-03 1 2015-07-23 2017-01-05 TRUE
26: 3 2 2018-09-03 2016-01-04 2016-02-03 1 2015-07-23 2017-01-05 TRUE
27: 3 2 2018-09-03 2016-02-04 2016-03-03 1 2015-07-23 2017-01-05 TRUE
28: 3 2 2018-09-03 2016-03-04 2016-04-03 1 2015-07-23 2017-01-05 TRUE
29: 3 2 2018-09-03 2016-04-04 2016-05-03 1 2015-07-23 2017-01-05 TRUE
30: 3 2 2018-09-03 2016-05-04 2016-06-03 1 2015-07-23 2017-01-05 TRUE
31: 3 2 2018-09-03 2016-06-04 2016-07-03 1 2015-07-23 2017-01-05 TRUE
32: 3 2 2018-09-03 2016-07-04 2016-08-03 1 2015-07-23 2017-01-05 TRUE
33: 3 2 2018-09-03 2016-08-04 2016-09-03 1 2015-07-23 2017-01-05 TRUE
34: 3 2 2018-09-03 2016-09-04 2016-10-03 1 2015-07-23 2017-01-05 TRUE
35: 3 2 2018-09-03 2016-10-04 2016-11-03 1 2015-07-23 2017-01-05 TRUE
36: 3 2 2018-09-03 2016-11-04 2016-12-03 1 2015-07-23 2017-01-05 TRUE
37: 3 2 2018-09-03 2016-12-04 2017-01-03 1 2015-07-23 2017-01-05 TRUE
38: 3 2 2018-09-03 2017-01-04 2017-02-03 1 2015-07-23 2017-01-05 TRUE
39: 1 2 2018-09-03 2017-02-04 2017-03-03 NA <NA> <NA> FALSE
40: 1 2 2018-09-03 2017-03-04 2017-04-03 NA <NA> <NA> FALSE
41: 1 2 2018-09-03 2017-04-04 2017-05-03 NA <NA> <NA> FALSE
42: 1 2 2018-09-03 2017-05-04 2017-06-03 NA <NA> <NA> FALSE
43: 1 2 2018-09-03 2017-06-04 2017-07-03 NA <NA> <NA> FALSE
44: 1 2 2018-09-03 2017-07-04 2017-08-03 NA <NA> <NA> FALSE
45: 1 2 2018-09-03 2017-08-04 2017-09-03 NA <NA> <NA> FALSE
46: 1 2 2018-09-03 2017-09-04 2017-10-03 NA <NA> <NA> FALSE
47: 1 2 2018-09-03 2017-10-04 2017-11-03 NA <NA> <NA> FALSE
48: 1 2 2018-09-03 2017-11-04 2017-12-03 NA <NA> <NA> FALSE
49: 1 2 2018-09-03 2017-12-04 2018-01-03 NA <NA> <NA> FALSE
50: 1 2 2018-09-03 2018-01-04 2018-02-03 NA <NA> <NA> FALSE
51: 3 2 2018-09-03 2018-02-04 2018-03-03 2 2018-02-13 2018-06-04 TRUE
52: 3 2 2018-09-03 2018-03-04 2018-04-03 2 2018-02-13 2018-06-04 TRUE
53: 3 2 2018-09-03 2018-04-04 2018-05-03 2 2018-02-13 2018-06-04 TRUE
54: 3 2 2018-09-03 2018-05-04 2018-06-03 2 2018-02-13 2018-06-04 TRUE
55: 3 2 2018-09-03 2018-06-04 2018-07-03 2 2018-02-13 2018-06-04 TRUE
56: 1 2 2018-09-03 2018-07-04 2018-08-03 NA <NA> <NA> FALSE
57: 1 2 2018-09-03 2018-08-04 2018-09-03 NA <NA> <NA> FALSE
.join id end_date month_start month_end spell start end in_care
If this issue occurs in IntelliJ IDEA, then it's a problem with the SSH key passphrase.
I would suggest deleting the SSH keys both from the local machine and from the GitHub account.
Generate a new set of keys without a passphrase and redo the setup; this time it should work.
To delete the SSH keys, go to the .ssh folder.
For macOS and Linux, execute these commands:
1. cd ~/.ssh
2. ls
Check for keys in the output of the ls command above.
Delete the keys which start with id_rsa or which start with id.
For Windows the process remains the same - delete the keys and generate them again without a passphrase - but I am unable to give the commands.
Try using the JAR directly:
java -jar C:\kotlinc\lib\kotlin-compiler.jar file.kts
You could also try putting only kotlinc in a no-space path (e.g. C:\Kotlin) or running scripts through cmd.exe.
In the end, it's important to note that the problem isn't with Kotlin itself.
I was trying to avoid having to edit my specifications to store the Id in a property but that does indeed work. However, the API for the Ardalis library might have changed since there is no Criteria property in the spec object. It does have a WhereExpressions collection with one entry but nowhere in there do I find a textual representation of my Guid.
/*
Source - https://stackoverflow.com/a/17231406
Posted by wiggles, modified by community. See post 'Timeline' for change history
Retrieved 2025-11-19, License - CC BY-SA 3.0
*/
@media Smartphone
// Styles go here
@media Tablet
// Styles go here
What is the use case for having a date before day 0, which is earlier than 4000 BC - does anything have a precision that needs a day?
Maintainer here 👋
Thanks for reporting this issue. It was introduced in the recent release (1.4.65) of the library. It has now been fixed in the latest version, i.e. 1.4.66.
See the release notes: https://github.com/Abhinandan-Kushwaha/react-native-gifted-charts/releases/tag/v1.4.66
Are you looking for std::variant?
Maybe give the keyframes animation an end state (100%) that matches how it initially was, but no start (0%) at all, so that it can start off from anywhere?
I've played with this question a bit and the answer is definitely yes for CRC-32-###, and it does not require brute-force methods. So you can make a fast self-referencing string, if you know the magic incantations and a few tricks.
the cRC of THiS StRinG IS OBviouslY 0XBEefc0de.
If the trick isn't obvious here: you can toggle case to adjust the CRC of a string without changing its meaning.
But of course, you need to know which bits need to be toggled, and here linear algebra is your friend, because CRC is linear - at least for this problem. If you toggle a bit in the string, the output CRC will have a fixed set of toggles associated with that toggle. So once you have 32 things you can toggle, you can make a 32x32 matrix and solve for a set of toggles which have the properties you want (see the sketch after the examples below).
thE aNsWEr To LIFE thE UNIVERse aNd eVERYthing is actually 0x42!
I used a simple program that just tries different combinations of caps. But it is kind of obvious, isn't it?
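Here is a small Python sketch of the linearity property the matrix trick relies on (not the solver itself): toggling the case of a given byte XORs a fixed, position-dependent delta into the CRC-32, regardless of the rest of a same-length message.

import zlib

def crc(data: bytes) -> int:
    return zlib.crc32(data) & 0xFFFFFFFF

# Toggling the case of an ASCII letter flips a single bit (0x20) in that byte.
msg = bytearray(b"the answer to life the universe and everything")
toggled = bytearray(msg)
toggled[0] ^= 0x20                                   # 't' -> 'T'
delta = crc(bytes(msg)) ^ crc(bytes(toggled))

# The same delta applies no matter what the other bytes are (same length, same position):
other = bytearray(b"XXX answer to life the universe and everything")
other_toggled = bytearray(other)
other_toggled[0] ^= 0x20
assert crc(bytes(other)) ^ crc(bytes(other_toggled)) == delta
print(hex(delta))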
# Source - https://stackoverflow.com/a/44233443
# Posted by Dig-Doug
# Retrieved 2025-11-19, License - CC BY-SA 3.0
from os import walk
import os

# TODO - Change these to correct locations
dir_path = "/tmp/stacktest"
dest_path = "/tmp/stackdest"

for (dirpath, dirnames, filenames) in walk(dir_path):
    # Called for all files, recursively
    for f in filenames:
        # Get the full path to the original file in the file system
        file_path = os.path.join(dirpath, f)
        # Get the relative path, starting at the root dir
        relative_path = os.path.relpath(file_path, dir_path)
        # Replace \ with / to make a real file system path
        new_rel_path = relative_path.replace("\\", "/")
        # Remove a starting "/" if it exists, as it messes with os.path.join
        if new_rel_path[0] == "/":
            new_rel_path = new_rel_path[1:]
        # Prepend the dest path
        final_path = os.path.join(dest_path, new_rel_path)
        # Make the parent directory
        parent_dir = os.path.dirname(final_path)
        mkdir_cmd = "mkdir -p '" + parent_dir + "'"
        print("Executing: ", mkdir_cmd)
        os.system(mkdir_cmd)
        # Copy the file to the final path
        cp_cmd = "cp '" + file_path + "' '" + final_path + "'"
        print("Executing: ", cp_cmd)
        os.system(cp_cmd)
In my case it was waiting for the OTP/Passcode again which didn't populate correctly in the Command Center. The fix was enabling VSCode to Show the login terminal when connecting to a remote SSH host. I enabled that and found that it was waiting for a Passcode.
Set: "remote.SSH.showLoginTerminal": false to true
Using older versions of VSCode and Remote SSH Plugin helps otherwise.
If you'd like a code analogy:
A dimension table is an "enum type definition". Every dimension row is a variant of this "enum".
E.g. a list of birds known to you.
A fact table row is an "enum" variable holding a single value.
E.g. a historical list of birds you have seen over time; each sighting references the bird type you encountered.
The only difference from the enums you know is that in code an enum represents a fixed set of variants known beforehand.
But in a database those sets are not fixed; they can grow (or even shrink) over time.
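A minimal sketch of the analogy in Python (the bird names and sightings are made up, of course):

from dataclasses import dataclass
from datetime import date
from enum import Enum

# "Dimension table" as an enum: the catalogue of birds known to you.
class Bird(Enum):
    SPARROW = 1
    ROBIN = 2
    CROW = 3

# A "fact table" row: each sighting references one dimension member.
@dataclass
class Sighting:
    seen_on: date
    bird: Bird  # the "foreign key" into the dimension

facts = [
    Sighting(date(2024, 5, 1), Bird.SPARROW),
    Sighting(date(2024, 5, 2), Bird.SPARROW),
    Sighting(date(2024, 5, 3), Bird.CROW),
]

# Unlike a real dimension table, this Enum is fixed at definition time;
# in a database the set of dimension rows can keep growing.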
I was wondering the same thing. Yes, you definitely can.
Right-click on the folder with the code files -> Open with -> Open with Visual Studio
If it's uploaded on GitHub, click on the green drop-down-menu Code -> Open with Visual Studio
Your script only looks at the first level of parts, so attachments nested deeper get skipped. To fix it, recursively walk through all parts of the message payload and handle both cases (`attachmentId` vs inline `data`); that way the images should download correctly.
Try this:
import base64
import os

def save_attachments(service, msg_id, payload, save_dir="attachments"):
    attachments = []
    os.makedirs(save_dir, exist_ok=True)  # make sure the target folder exists

    def walk_parts(parts):
        for part in parts:
            filename = part.get("filename")
            mime_type = part.get("mimeType")
            body = part.get("body", {})

            if filename and (body.get("attachmentId") or body.get("data")):
                if body.get("attachmentId"):
                    attach_id = body["attachmentId"]
                    attachment = service.users().messages().attachments().get(
                        userId="me", messageId=msg_id, id=attach_id
                    ).execute()
                    data = attachment.get("data")
                else:
                    # Inline base64 data
                    data = body.get("data")

                if data:
                    file_data = base64.urlsafe_b64decode(data)
                    filepath = os.path.join(save_dir, filename)
                    with open(filepath, "wb") as f:
                        f.write(file_data)
                    attachments.append(filename)

            # Recurse into nested parts
            if "parts" in part:
                walk_parts(part["parts"])

    walk_parts(payload.get("parts", []))
    return attachments
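Hypothetical usage, assuming `service` is an authenticated Gmail API client and `msg_id` is the id of a message fetched with format="full":

msg = service.users().messages().get(userId="me", id=msg_id, format="full").execute()
saved = save_attachments(service, msg_id, msg["payload"])
print(saved)  # filenames written into the attachments/ folder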
(PS: I am new to stackoverflow, so all comments are appreciated)
This would be a different thing, but if browsers have restrictions on it for the clear security risks, then how does software like Securly block pages? It seems that the browsers would not allow it, but it is allowed and used by many schools. How does that work?
Those messages do not indicate that JGroups is being started. It's just that the configuration parser has built-in JGroups stacks it can use if necessary. When JGroups starts you will see logs coming from GMS and other protocols.
If I had to guess, I’d call plt.show(block=False) followed by plt.pause(0) before your first measurement.
This forces the GUI backend to finalize the window and apply fullscreen, so fig.get_size_inches() is correct immediately.
fig.canvas.manager.full_screen_toggle()
plt.show(block=False)
plt.pause(0) # forces one GUI event cycle
The figure does not have its true on-screen size until the GUI event loop runs once. Fullscreen mode is only applied during that first event cycle. plt.pause(0) is the clean way to let the backend process pending events without adding delays, so the correct size is available at the first measurement.
(PS: All comments are appreciated, I’m new around here)
Thanks to the comments on my question, I found a solution that satisfied my requirements.
I ended up writing the following in the requirements.txt file:
--no-binary just_playback~=0.1.8
just_playback~=0.1.8
And installing it with pip like this:
pip install -r requirements.txt .
Completions ("wait for completion") wait for a one-off event, e.g. a data structure being initialized by another thread. They are built on top of the existing waitqueue infrastructure. The kernel documentation describing them is very good: kernel.org/doc/Documentation/scheduler/completion.txt
I need to increase processing speed so that backfilling daily histories takes hours, not days or weeks.
At this time, I've taken up a suggestion to build my own URL parsing engine in Rust (using the url and publicsuffix crates) for Python. I've been successful so far. The Rust program builds successfully and I am able to import the new library into python and run it. Results looks reasonable so far. I have a bit more pipeline work to do. I'll share the results when I'm done.
Clangd treats .cu files as CUDA only, so host‑side C++ features like <format> and <chrono> aren’t recognized by default. The long‑term fix should be to tell clangd to use your GCC toolchain and enable modern C++ explicitly:
If:
  PathMatch: ".*\\.cu"
CompileFlags:
  Add:
    - -std=c++20
    - --gcc-toolchain=/usr
This way clangd parses host code with C++20 and finds the right headers automatically, without any silenced warnings and such. It’s future‑proof since clangd will follow your toolchain instead of hard‑coded paths.
(PS: All comments appreciated, I’m new around here)
Thank you very much for your explanations. I thought that the re-mapping caused this data length.
This is not the same as a concat-and-assign operator, but it can help to declutter code with lots of foo = foo .. bar.
local function add(str)
    result = result .. str
end
You should be doing this in PHP, as HTML. A CSS Grid container with grid-template-columns: repeat(3, 1fr) and grid-template-rows: auto will automatically take the SQL data you format within it and arrange the items 3 across, breaking to a new row after each group of 3.
My solution was to return the index.html when a 404 happens. It is applicable to any SPA app, not just Angular; I am using Vue.js, for example.
P.S. The solution in the accepted answer didn't work for me. I mean, it simply redirects to the index.html, while I want the page to stay on the same URL. For example, hitting the URL mysite/user/123 should show me the user details instead of redirecting to the index.html (home) page.
Unfortunately, after a ton of research and testing: when working with client smart cards, the server does not and will not have access to the user's smart card certificate, so it cannot use it to make server-to-server calls, even on behalf of the client.
The problem cannot be resolved as a server-side-only solution. The client browser must make the appropriate API calls and supply the credentials to authenticate/authorize with the endpoint API.
BLUF: at this time, this is not possible using a Blazor server-side application whose endpoints require end-user smart card certificate authentication.
This is my data structure:
{
  "profileId": 5483,
  "name": "BLA BLA",
  "events": [
    {
      "id": 2554, "typeId": 1, "detail": "details follows EN",
      "dates": [
        {"id": 2558, "eventDate": "2015-11-20", "startTime": "2015-11-20"}
      ]
    },
    {
      "id": 2555, "typeId": 2, "detail": "BLA BLA",
      "dates": [
        {"id": 2559, "eventDate": "2015-11-21", "startTime": "2015-11-21"},
        {"id": 2560, "eventDate": "2015-11-22", "startTime": "2015-11-22"}
      ]
    }
  ]
}
@for (date of datesForArr.controls; let j = $index; track $index) {
  ///// datesForArr.controls is null
}

get eventsForArr(): FormArray {
  return this.myForm.get('events') as FormArray;
}

get datesForArr() {
  return this.myForm.get('dates') as FormArray;
}