This has runnable Colab code if it helps.
https://keras.io/examples/timeseries/eeg_signal_classification/#setup-and-data-downloads
Big thanks to @Tsyvarev for the comment:
You are specifying ${brt_SOURCE_DIR}/include as an include directory. Assuming the full path to the header SourceModelBase.hpp is ${brt_SOURCE_DIR}/include/SourceModels/SourceModelBase.hpp, that header should be included by relative path: #include <SourceModels/SourceModelBase.hpp>. If you want to include the header with plain #include "SourceModelBase.hpp", then you should add ${brt_SOURCE_DIR}/include/SourceModels to the list of include directories. (This can be done either via the target_include_directories command, or by an additional parameter for BASE_DIRS.)
This was my problem. I failed to assess the compiler errors and focused too much on what the IDE (not the compiler) was reporting. The build stopped failing as soon as I added ${brt_SOURCE_DIR}/include/SourceModels
and ${brt_SOURCE_DIR}/include/ListenerModels
(which had the same issue) to the target_include_directories command. The IDE is still complaining, but the build works fine.
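For reference, a minimal sketch of the resulting call; brt_target is a hypothetical target name here, only the directories come from the answer above:

# Sketch only - replace brt_target with your actual library/executable target.
target_include_directories(brt_target
    PUBLIC
        ${brt_SOURCE_DIR}/include
        ${brt_SOURCE_DIR}/include/SourceModels
        ${brt_SOURCE_DIR}/include/ListenerModels
)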
Any idea how you solved this problem? I'm also facing the same issue.
Same error. I am unsure how it thought that I had multiple targets (main.exe), but I did a rm -rf build/
and that fixed everything.
# Makefile
$(BUILD)/main.exe: <dependencies>
./build_main.sh
Essentially: "In effect, a static nested class is behaviorally a top-level class that has been nested in another top-level class for packaging convenience." From Static nested class in Java, why?
This doesn't work for me; I can't find .after and .before in the Thymeleaf docs.
You can press Ctrl+f in the file tree (which can be accessed via SPC+f+t), and then enter the name of the file.
Duda Nogueira from Weaviate here!
Can you confirm you are using Langchain and not Llamaindex?
For Langchain, you need to explicitly request this with:
docs = db.similarity_search("traditional food", return_uuids=True)
print(docs[0].metadata.get("uuid"))
I just noticed this is an undocumented feature! I have added it here in our langchain recipes to make it visible:
https://github.com/weaviate/recipes/tree/main/integrations/llm-frameworks/langchain/loading-data
Let me know if this helps!
Thanks!
Try CPPFLAGS="-I/usr/include/freetype2"
just before your build commands.
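For example, assuming a typical configure-and-make build (adjust to whatever build commands you actually run):

CPPFLAGS="-I/usr/include/freetype2" ./configure
make

# or export it for the whole session
export CPPFLAGS="-I/usr/include/freetype2"
make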
Here's a simpler example for a MessageBox, without showing the cmd.exe console:
@echo off
START mshta "javascript:var WshShell = new ActiveXObject("WScript.Shell");WshShell.Popup('Message...', 10, 'Title', 1 + 4096);close();" -flags1
exit
I haven't written it up in detail (yet), but I mapped the examples from Tutorials - Scala.js to Mill v0.12.5 in Scala 3 here.
Instead of
ALTER table table_name modify COLUMN column_name varchar (size);
I used
ALTER table table_name modify column_name varchar (size);
Alt+Shift+N helps you create classes, packages, etc. with ease, but sometimes when the perspective is set to something else it won't work as expected, so you have to change it via the Window menu: Window -> Perspective -> Open Perspective -> Java. Hope it helps! 😊
The answer from @ISanych is correct but incomplete. I spent almost an entire day trying to achieve what I know now is not possible using the Docker COPY
command. I read many issues similar - but not quite the same - as mine on the internet. I hope the explanation below will prevent another user from making the same mistake.
COPY will copy the contents of each arg dir into the destination dir. Not the arg dir itself, but the contents. https://github.com/moby/moby/issues/15858#issuecomment-136552399
I don't know this user, but this is the most authoritative and clear description of the difference in behavior that I have found (the emphasis is mine):
The OP has the following structure and would like to recursively copy the contents of the files
directory into a similarly named files
directory already existing on the image file-system.
files/
folder1/
file1
file2
folder2/
file1
file2
The OP used the following command
COPY files/* /files/
and was surprised to learn that rather than copying the two folders folder1
and folder2
and their contents into the files
directory the command actually copied the contents of folder1
and of folder2
into the files
directory. As explained by the moby developer above in the full response in the Github issue, the shell glob (*) will expand to pass in folder1
and folder2
to the COPY command which will then copy the contents of each of folder1
and folder2
into the destination folder.
If you use the approach suggested by @ISanych - COPY files/ /files/
- then the contents of the files
directory - that is the directories named folder1
and folder2
- will be copied into the destination /files/
directory as indicated.
Importantly, there does not seem to be any way to avoid this behavior of the COPY
command. This behavior is highlighted (as it was for me) in the situation where there are multiple source file structures similar to that described by the OP and the user wishes to copy several top-level directories recursively into the root directory on the target - in one go:
files1/
folder1/
file1
file2
folder2/
file1
file2
files2/
folder1/
file1
file2
folder2/
file1
file2
Using the docker COPY
command as you would the *nix cp
command will yield unexpected results. That is, writing the command COPY files1 files2 .
will not behave in the same way as a similar GNU cp command. Instead the COPY command shown will yield the result in which you have the contents of files1/folder1
and files2/folder1
merged into a single folder1
on the destination and you are left with something like this and wondering what happened to the second (or first) directory?
folder1/
file1
file2
folder2/
file1
file2
If you wish to recursively copy the directories and their contents to a destination directory from the build context, the work-around is to create the destination directories first with the desired name and then issue a COPY for each of the top-level directories. E.g. for the files1 directory:
RUN mkdir ./files1
COPY files1 ./files1
RUN mkdir ./files2
COPY files2 ./files2
Based on @njzk2's comment showing the specific documentation, the noUncheckedIndexedAccess flag tells the TypeScript compiler to type index access as potentially undefined (see the small example after the config below).
tsconfig.json
{
  "compilerOptions": {
    "noUncheckedIndexedAccess": true
  }
}
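A small example of the effect (my own illustration): with the flag on, an indexed read is typed as possibly undefined and must be narrowed before use.

const words: string[] = ["alpha", "beta"];

// With "noUncheckedIndexedAccess": true, `first` has type `string | undefined`.
const first = words[0];

// first.toUpperCase(); // compile error: 'first' is possibly 'undefined'
if (first !== undefined) {
    console.log(first.toUpperCase()); // fine after narrowing
}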
One thing you need to be careful about if you are using Windows: use the unquoted address, like this.
kubectl proxy --address=0.0.0.0 --disable-filter=true
I have an example: think of your element like a pizza box 🍕. When you order a pizza, the delivery guy hands you the whole box, not just a single slice, right? That's exactly how selecting an element works in JavaScript!
When you do:
let createNote = document.querySelector('.create-note');
You're not just getting an empty shell. Nope! You're getting the entire pizza—the heading (like extra cheese 🧀), the textarea (the saucy base 🍅), and the buttons (those spicy toppings 🌶️).
So when you change:
createNote.style.display = 'block';
It’s like opening the pizza box. BOOM! 🍕 You can now see all the delicious elements inside, even though you only grabbed the box.
Moral of the story? When you select a container element (like a div), it always holds its child elements, just like a pizza box holds your entire order. The display: none was just keeping the lid closed!
Enjoy your hot DOM knowledge! 🔥😆 That's how it works.
Alright, so this answer took me on quite the adventure. While I will be sharing some honourable mentions about gotchas while using the Adobe Commerce REST APIs, this answer will primarily just be about the OAuth 1.0a portion.
For anyone that couldn't find the official documentation:
"message": "Specified request cannot be processed."
check your API URL. The official documentation says you need to use a "Store ID" or use the "default store" in your URL.
{domain}/rest/{storeID}/V1/{theEndpoint}
or {domain}/rest/default/V1/{theEndpoint}
. This is not necessary for all stores (or perhaps my company's store is just weird like that). My URL ended up looking like {domain}/rest/V1/{theEndpoint}
.HMAC-SHA256
for your Signature Method and that you're adding your authorization as an Authorization
header not in URL params or the request body.Authorization
header. So this was a no go for me.
getAuthorizationHeader()
. This is the library I chose for Auth Signage.
Beta
... for 12 years according to git blame.HMAC-SHA256
. However, they have a Signer class for it that I used, so perhaps the documentation just hasn't been updated.<groupId>com.google.oauth-client</groupId>
<artifactId>google-oauth-client</artifactId>
<version>1.38.0</version>
Someone Somewhere (probably): "But this has many transitive vulnerabilities!" Correct! Luckily I was able to remove those by excluding certain child dependencies and updating the version of one dependency. Mine is in Maven format and works for my uses, but your mileage may vary.
<!-- Updated version of the excluded guava -->
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>33.4.0-jre</version>
</dependency>
<dependency>
<groupId>com.google.oauth-client</groupId>
<artifactId>google-oauth-client</artifactId>
<version>1.38.0</version>
<exclusions>
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
</exclusion>
<!-- Removed seemingly without consequence -->
<exclusion>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
</exclusion>
</exclusions>
</dependency>
I'm also using OkHttp3 for this as HTTP client.
<dependency>
<groupId>com.squareup.okhttp3</groupId>
<artifactId>okhttp</artifactId>
<version>4.12.0</version>
</dependency>
For (de)serialization I'm using Jackson.
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.18.2</version>
</dependency>
<dependency>
<artifactId>jackson-core</artifactId>
<groupId>com.fasterxml.jackson.core</groupId>
<version>2.18.2</version>
</dependency>
All of this code is in Kotlin (2.1.0) but should be pretty easily transferable to Java as necessary.
Some handy extension functions.
// LogManager comes from Log4J
val <T : Any> T.logger: Logger get() = LogManager.getLogger()

// Request is from OkHttp3
fun Request.newBuilder(
    builder: Request.Builder.(Request) -> Request.Builder
) = builder(newBuilder(), this).build()
I've truncated the names since all of them live in a singleton called OAuth1, but they would work separated with more qualified names just as easily.
import com.google.api.client.auth.oauth.OAuthHmacSha256Signer
import com.google.api.client.auth.oauth.OAuthParameters
import com.google.api.client.http.GenericUrl
import okhttp3.Request
import okhttp3.Response
object OAuth1 {
    // I pull secrets from a remote secrets manager, so these are
    // just an easy way to standardize input usage. This uses the
    // nomenclature from Postman to make it more straightforward to
    // go from Postman to this implementation.
    interface Secrets {
        val consumerKey: String
        val consumerSecret: String
        val accessToken: String
        val tokenSecret: String
    }

    // A default implementation data class (not necessary to use).
    data class DefaultSecrets(
        override val consumerKey: String = "consumerKey",
        override val consumerSecret: String = "consumerSecret",
        override val accessToken: String = "accessToken",
        override val tokenSecret: String = "tokenSecret"
    ) : Secrets

    // Implementing this interface on a class should make it very easy to
    // integrate OAuth1 in the class for usage.
    interface UsesOAuth1 {
        val oAuthSecrets: Secrets

        // This is where the HMAC-SHA256 comes into play.
        // Change this for different signature types.
        val signer
            get() = OAuthHmacSha256Signer(oAuthSecrets.consumerSecret)
                .apply { this.setTokenSecret(oAuthSecrets.tokenSecret) }

        val oauthParams: OAuthParameters
            get() = OAuthParameters().apply {
                consumerKey = oAuthSecrets.consumerKey
                token = oAuthSecrets.accessToken
                signer = this@UsesOAuth1.signer
            }
    }

    // An OkHttp3 Interceptor that adds the Authorization header to each call
    // created with a client it is added to.
    class Interceptor(
        private val oauthParams: OAuthParameters
    ) : okhttp3.Interceptor {

        private fun Request.computeOAuth() {
            oauthParams.run {
                // Nonce, timestamp, and signature need to be unique for each
                // call and are therefore generated per intercept.
                computeTimestamp()
                computeNonce()
                computeSignature(method, GenericUrl(url.toString()))
            }
        }

        override fun intercept(
            chain: okhttp3.Interceptor.Chain
        ): Response = chain.request().newBuilder {
            logger.info("Calling with OAuth1: ${it.method}: ${it.url}")
            it.computeOAuth()
            addHeader("Authorization", oauthParams.getAuthorizationHeader())
        }.let {
            chain.proceed(it)
        }
    }
}
So that's all the OAuth stuff at a high level. However, I'll go ahead and share the prototype implementation I have with the HTTP client for anyone that just wants to copy, paste, and go.
Some more handy extension functions.
// HttpUrl is from OkHttp3
fun HttpUrl.newBuilder(
    builder: HttpUrl.Builder.() -> HttpUrl.Builder
) = builder(newBuilder()).build()

inline fun <reified T> String.tryParseAsJson(): T? =
    runCatching { parseAsJson<T>() }.onFailure {
        val type = T::class.simpleName?.let { " ($it)" } ?: ""
        logger.warn("Failed to parse JSON$type. String: $this")
    }.getOrNull()

inline fun <reified T> String.parseAsJson(): T =
    jacksonObjectMapper.readValue(this, T::class.java)

val jacksonObjectMapper: ObjectMapper by lazy {
    ObjectMapper()
        .registerModule(jacksonKotlinModule)
        .findAndRegisterModules()
}

private val jacksonKotlinModule by lazy {
    KotlinModule.Builder()
        .withReflectionCacheSize(512)
        .configure(KotlinFeature.NullToEmptyCollection, false)
        .configure(KotlinFeature.NullToEmptyMap, false)
        .configure(KotlinFeature.NullIsSameAsDefault, false)
        .configure(KotlinFeature.SingletonSupport, false)
        .configure(KotlinFeature.StrictNullChecks, false)
        .build()
}
Below is the abstract base API client.
import okhttp3.HttpUrl
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
abstract class BaseApiClient {
    abstract val baseUrl: HttpUrl
    abstract val httpClient: OkHttpClient

    abstract fun handler(response: Response)

    inline fun <reified T> Request.executeWithHandling(): T? {
        val response = httpClient.newCall(this).execute()
        if (!response.isSuccessful) handler(response)
        // For now this will just parse JSON.
        // However, in the future this could use
        // body.mediaType to dynamically parse the T type.
        return response.body?.string()?.tryParseAsJson<T>()
    }

    fun String.get(): Request = Request.Builder().apply {
        // Call the Builder's get() (sets the HTTP method), not this String extension.
        this@apply.get()
        url(baseUrl.newBuilder { addPathSegments(this@get) })
    }.build()
}
Below is an implementation of the BaseApiClient into a "MagentoApiClient" specific to Adobe Commerce. Note: this only has a single endpoint, and the baseUrl has been scrubbed for confidentiality reasons.
class MagentoApiClient(
    override val oAuthSecrets: OAuth1.Secrets,
    override val baseUrl: HttpUrl = "https://#{yourStoreDomain}/rest/V1/".toHttpUrl(),
) : BaseApiClient(), OAuth1.UsesOAuth1 {

    override val httpClient: OkHttpClient = OkHttpClient.Builder()
        .addInterceptor(OAuth1.Interceptor(oauthParams))
        .build()

    override fun handler(response: Response) {
        // The only reason this function would be called is if the response code is not 2xx or 3xx.
        response.body?.string()?.let { error ->
            MagentoApiExceptionHandler.handle(error)
        }
    }

    fun customerById(id: Int): MagentoCustomer? =
        "customers/$id".get().executeWithHandling()
}
Usage:
fun main() {
    // Please don't commit your secrets to source code,
    // and please make sure to pull them from a secure source.
    val magentoApi = MagentoApiClient(OAuth1.DefaultSecrets())
    val response = magentoApi.customerById(15)
    println(response)
}
Alas I'm unable to share the data model for MagentoCustomer
or the exception handler code. However, this should be able to get you up and running from almost nothing. Feel free to throw any questions in my general direction, and I'll answer as much as I can.
You need to set the draggable
option to false
i.e.:
import { Carousel } from '@mantine/carousel';
function Demo() {
return (
<Carousel draggable={false}>
{/* ...slides */}
</Carousel>
);
}
You can rank your website by SEO optimization of your keywords, meta tags, title, etc. Every page must be fully optimized and well arranged. Recently I've been working on my personal website SONIC MENU WITH PRICES to get it ranked in Google using SEO techniques.
I've figured out that you can achieve that by slicing a slice:
const std = @import("std");
fn runtimeNumber() u16 {
return 32;
}
fn func(b: [4]u8) void {
std.debug.print("{any}", .{b});
}
pub fn main() void {
var buf: [8096]u8 = undefined;
const n = runtimeNumber();
func(buf[n..][0..4].*);
}
I find this the easiest:
:%norm A,
I've been working with expressing constraints using SolverAdd in VBA and have encountered the "missing constraints" bug as others have.
It would perhaps be helpful to know the state of the constraints in the Solver Parameters dialog when the problem is encountered, when the problem appears to be solved, and perhaps when Excel and a workbook are opened. I.e. open Data -> Solver to get the Solver Parameters dialog and examine the Constraints list there. This appears to be the only way to really compare results.
Daily budget alerts are still not supported by GCP as of Feb 2025. One decent workaround in my opinion is to set a monthly budget and alert on forecasted figures. That won't be as good as a daily spend alert, but it will trigger much faster than an alert based on actual spend.
It was added in version 2024.3
Where is your code? Please add the code for us to review 🙃
I'm also looking for how to measure latency. The only solution that came to mind, which didn't work very well, was to have the server write the timestamp into the frames and the client read it with OCR and calculate the difference to get the latency. I'm now checking whether there's a way to package the timestamp with RTSP so I can extract it on the client.
Yeeees, that is the solution. This AVD error appears with W10 and W11 (both freshly installed) on the same Lenovo YOGA 530 with 32 GB RAM and a fresh 1 TB M.2 SSD. I tried everything I could find; only installing the MS Visual C++ Redistributable helped - thank you!!!
The easiest workaround is to rename the container, assuming you're running some docker build+run or docker-compose.
Depending on your workflow, consider post-test removal/pruning, in particular with cancelled(), failure() or always(). If you're using self-hosted runners, log in to that system and kill the container. For GitHub-hosted runners, I don't think there is any access to them; you'll have to wait until the default timeout kills the container.
Ensure you're using a compatible Java version (Java 8 or later), increase the memory allocation by modifying the weka.ini
file with -Xmx2g
, and check for any background processes consuming system resources. You can also try disabling antivirus/firewall temporarily, run it via the command line for faster startup. If these steps don't work, reinstalling WEKA may help resolve any issues.
I had the same problem, and it turned out to be network related. I had to add a VPC endpoint for Athena and make sure that port 444 was open. That resolved it for me. Turning on debug level error logging was the key that showed me the full error.
Try comparing with this and using it to fix your code. Here's an example:
const span = document.createElement("span")
const text = `<p>Hello</p>`
span.innerHTML = text
document.getElementById("textArea").innerHTML = span
Your code works for me without any issue. The only thing I could not test was your shp, however.
This trigger works: change AFTER to BEFORE and take out the 'of email'.
Thanks!
delete from Players where id not in (select PlayerID from Games)
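One caveat worth knowing (general SQL behavior, not specific to this answer): if PlayerID can be NULL in Games, NOT IN matches nothing, so a NOT EXISTS form is safer. A sketch using the same table and column names:

delete from Players
where not exists (
    select 1
    from Games
    where Games.PlayerID = Players.id
)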
There are more answers to using re2 on this question. There is another package that is more up to date for Python 3, https://pypi.org/project/google-re2/. I simply pip installed it and it works. It imports as re2, not google_re2 like you might think.
In my case, I updated the PHP version of XAMPP (Windows) manually from 8.2.12 to 8.2.27 by replacing the files in the xampp/php folder with the ones downloaded from php.net.
The problem I had was that the console gave me the correct version when I ran php -v, but when I printed phpinfo() it showed the old version...
Then I found that the non-thread-safe version does not include 2 files that seem to have to do with the version that phpinfo() shows: php8apache2_4.dll and php8ts.dll.
What I did was download the thread-safe version (instead of the non-thread-safe one) and replace the files again.
After that everything worked fine and gave me the correct versions in both environments.
Download PHP Thread Safe from the official page https://windows.php.net/download/
This package contains phpXapache2_X.dll.
Don't download phpXapache2_X.dll from another source; it could be a different version, or x86.
I had a similar problem, to fix I added a format string to my app.module.ts imports like this:
NbDateFnsDateModule.forRoot({
format: 'MMM dd, yyyy' // Errors without this line
}),
You can run these commands directly in your terminal every time you use Flutter; it works smoothly:
$Env:NO_PROXY="127.0.0.1,localhost,::1"
$Env:https_proxy="http://<username>:<password>@<domain name>:<port number>"
$Env:http_proxy="http://<username>:<password>@<domain name>:<port number>"
Found the solution, thanks for all your help everybody! I'll be upfront with you: I am VERY BAD at using most debugging tools and stepping through the program with breakpoints, but I recently became decent at using Logcat, so the way I debugged this was by drawing it out with pencil and paper and visually mapping out the relationships of my variables with a timeline diagram!
I was going to write down the numbers and everything, but just in drawing this, it occurred to me that, to my astonishment, I didn't have too many variables (which is definitely the way it seems, trust me, I know); I was actually MISSING an important variable! The "notification threshold", as it were, and the respective "daysBeforeExpMillis" to calculate it. Folks were pointing out the inconsistency between days notice on one hand vs. days before exp on the other, which I admit is extremely confusing because it sounds like "days_before_exp" should be calculated from the current time. In reality there are two time windows: one is the days notice window, the bigger one, which is fixed and determined by the user. The other is the "days_before_exp" according-to-your-reminders window, which is not fixed but adjusts itself based on how close to expiration we are.
So the solution was to put back in my days_until_exp-setting if statement WITH this new notification threshold and then my notifications started behaving correctly:
Log.d("ITEM NAME:", "ITEM NAME: " + name);
String expDateString = item.getString("expDate");
Date expDate = sdf.parse(expDateString); // Set to end of day (23:59:59)
long expDateMillis = expDate.getTime();
int daysNotice = item.getInt("daysNotice");
long daysNoticeMillis = daysNotice * MILLIS_IN_A_DAY;
long expirationThreshold = expDateMillis - daysNoticeMillis;
long currentTimeMillis = System.currentTimeMillis();
if (currentTimeMillis < (expDateMillis - (14 * MILLIS_IN_A_DAY))) {
    // Far from expiring (more than DAYS_BEFORE_EXPIRY days away) – schedule for 14 days before expiry.
    DAYS_BEFORE_EXPIRY = 14;
} else if (currentTimeMillis < (expDateMillis - (7 * MILLIS_IN_A_DAY))) {
    // Between 14 and 7 days before expiry – schedule for 7 days before expiry.
    DAYS_BEFORE_EXPIRY = 7;
} else if (currentTimeMillis < (expDateMillis - (3 * MILLIS_IN_A_DAY))) {
    // Between 7 and 3 days before expiry – schedule for 3 days before expiry.
    DAYS_BEFORE_EXPIRY = 3;
} else if (currentTimeMillis < expirationThreshold) {
    // Less than 3 days away but still before the user's notice threshold – schedule for 1 day before expiry.
    DAYS_BEFORE_EXPIRY = 1;
}

// The new piece: the notification threshold derived from DAYS_BEFORE_EXPIRY.
long daysBeforeExpMillis = (MILLIS_IN_A_DAY * DAYS_BEFORE_EXPIRY);
long notificationThreshold = expDateMillis - daysBeforeExpMillis;
Log.d("EXP DATE MILLIS:", "EXP DATE MILLIS: " + String.valueOf(expDateMillis));
Log.d("DAYS NOTICE:", "DAYS NOTICE: " + String.valueOf(daysNotice));
Log.d("DAYS NOTICE:", "DAYS NOTICE MILLIS: " + String.valueOf(daysNoticeMillis));
Log.d("DAYS BEFORE EXP:", "DAYS BEFORE EXP: " + String.valueOf(DAYS_BEFORE_EXPIRY));
Log.d("DAYS BEFORE EXP:", "DAYS BEFORE EXP MILLIS: " + String.valueOf(daysBeforeExpMillis));
Log.d("NOTIFY TIME MILLIS:","NOTIFY TIME MILLIS: " + String.valueOf(notificationThreshold));
Here are my much cleaner logs that I was able to retrieve after debugging the issue:
2025-02-28 13:19:10.843 6370-6426 ITEM NAME: sma...tory.smartinventorywithsearch D ITEM NAME: Elder Scrolls V: Skyrim (Xbox 360 / PS3 / PC)
2025-02-28 13:19:10.843 6370-6370 AutofillManager sma...tory.smartinventorywithsearch D view not autofillable - not passing ime action check
2025-02-28 13:19:10.844 6370-6426 EXP DATE MILLIS: sma...tory.smartinventorywithsearch D EXP DATE MILLIS: 1741579200000
2025-02-28 13:19:10.845 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE: 100
2025-02-28 13:19:10.845 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE MILLIS: 8640000000
2025-02-28 13:19:10.845 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP: 7
2025-02-28 13:19:10.845 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP MILLIS: 604800000
2025-02-28 13:19:10.845 6370-6426 NOTIFY TIME MILLIS: sma...tory.smartinventorywithsearch D NOTIFY TIME MILLIS: 1740974400000
2025-02-28 13:19:10.845 6370-6426 Debug sma...tory.smartinventorywithsearch D Elder Scrolls V: Skyrim (Xbox 360 / PS3 / PC) will expire soon!
2025-02-28 13:19:10.846 6370-6426 Alarm sma...tory.smartinventorywithsearch D Scheduled notification for Elder Scrolls V: Skyrim (Xbox 360 / PS3 / PC) at 1740974400000
2025-02-28 13:19:10.846 6370-6426 ITEM NAME: sma...tory.smartinventorywithsearch D ITEM NAME: Irish Penny Whistle - Key Of D | Book | Condition Good
2025-02-28 13:19:10.847 6370-6426 EXP DATE MILLIS: sma...tory.smartinventorywithsearch D EXP DATE MILLIS: 1740546000000
2025-02-28 13:19:10.847 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE: 80
2025-02-28 13:19:10.847 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE MILLIS: 6912000000
2025-02-28 13:19:10.847 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP: 7
2025-02-28 13:19:10.847 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP MILLIS: 604800000
2025-02-28 13:19:10.847 6370-6426 NOTIFY TIME MILLIS: sma...tory.smartinventorywithsearch D NOTIFY TIME MILLIS: 1739941200000
2025-02-28 13:19:10.847 6370-6426 Debug sma...tory.smartinventorywithsearch D Irish Penny Whistle - Key Of D | Book | Condition Good has expired!
2025-02-28 13:19:10.848 6370-6426 Alarm sma...tory.smartinventorywithsearch D Scheduled notification for Irish Penny Whistle - Key Of D | Book | Condition Good at 1739941200000
2025-02-28 13:19:10.848 6370-6426 ITEM NAME: sma...tory.smartinventorywithsearch D ITEM NAME: coke
2025-02-28 13:19:10.849 6370-6426 EXP DATE MILLIS: sma...tory.smartinventorywithsearch D EXP DATE MILLIS: 1558324800000
2025-02-28 13:19:10.849 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE: 90
2025-02-28 13:19:10.849 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE MILLIS: 7776000000
2025-02-28 13:19:10.849 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP: 7
2025-02-28 13:19:10.849 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP MILLIS: 604800000
2025-02-28 13:19:10.849 6370-6426 NOTIFY TIME MILLIS: sma...tory.smartinventorywithsearch D NOTIFY TIME MILLIS: 1557720000000
2025-02-28 13:19:10.849 6370-6426 Debug sma...tory.smartinventorywithsearch D coke has expired!
2025-02-28 13:19:10.850 6370-6426 Alarm sma...tory.smartinventorywithsearch D Scheduled notification for coke at 1557720000000
2025-02-28 13:19:10.854 6370-6426 ITEM NAME: sma...tory.smartinventorywithsearch D ITEM NAME: Ratchet & Clank 2 for PlayStation 2
2025-02-28 13:19:10.855 6370-6426 EXP DATE MILLIS: sma...tory.smartinventorywithsearch D EXP DATE MILLIS: 1741064400000
2025-02-28 13:19:10.855 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE: 250
2025-02-28 13:19:10.855 6370-6426 DAYS NOTICE: sma...tory.smartinventorywithsearch D DAYS NOTICE MILLIS: 21600000000
2025-02-28 13:19:10.855 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP: 3
2025-02-28 13:19:10.855 6370-6426 DAYS BEFORE EXP: sma...tory.smartinventorywithsearch D DAYS BEFORE EXP MILLIS: 259200000
2025-02-28 13:19:10.855 6370-6426 NOTIFY TIME MILLIS: sma...tory.smartinventorywithsearch D NOTIFY TIME MILLIS: 1740805200000
2025-02-28 13:19:10.855 6370-6426 Debug sma...tory.smartinventorywithsearch D Ratchet & Clank 2 for PlayStation 2 will expire soon!
2025-02-28 13:19:10.858 6370-6426 Alarm sma...tory.smartinventorywithsearch D Scheduled notification for Ratchet & Clank 2 for PlayStation 2 at 1740805200000
...
2025-02-28 13:19:15.860 6370-6370 Notification sma...tory.smartinventorywithsearch D Notification received! // Irish Penny Whistle (expired)
2025-02-28 13:19:15.883 6370-6370 Notification sma...tory.smartinventorywithsearch D Notification received! // Coke (expired)
Now the notifications only fire prematurely if an item is already expired, which happens to be exactly the behavior I wanted anyway, so it's a wrap! 😁
Again I really appreciate your folks' help, have a good one everybody!
Stop the running scheme or application and try again after that.
I found the same issue after upgrading Hibernate. This document https://docs.jboss.org/hibernate/orm/6.4/querylanguage/html_single/Hibernate_Query_Language.html#datetime-literals shows how to format the string literal.
In your example
wt.endDate = '9999-12-31 23:59:00'
should look like
wt.endDate = {ts '9999-12-31 23:59:00' }
I usually just cp that file to /tmp as the new owner and then move it wherever you want. A cp to /tmp of a 777 file owned by another user produces a copy owned by you, which effectively changes the file ownership.
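In other words, something like this (paths are placeholders); the copy is a brand-new file created by you, so it gets your ownership:

cp /path/to/file /tmp/file   # the new file in /tmp is owned by you
mv /tmp/file /path/to/file   # move it back (or wherever you want)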
When executing an ALTER TABLE statement with multiple operations, the order of execution is not explicitly guaranteed by the documentation. While MySQL generally processes these operations in the order they are specified, relying on this behavior is not advisable for critical operations where the sequence is important.
Given this ambiguity, for operations where the order is crucial—such as adding a composite index before dropping an existing single-column index to ensure query performance is maintained—it is recommended to execute these actions in separate ALTER TABLE statements. This approach ensures that each operation is completed successfully before the next one begins, thereby minimizing potential performance issues.
In MySQL 8.0 and later, the introduction of Online Data Definition Language (DDL) operations has significantly improved the ability to perform schema modifications with minimal locking and downtime. These enhancements allow for many alterations, such as adding or dropping indexes, to be executed without requiring a full table rebuild or extensive locking, thereby reducing the impact on concurrent data manipulation operations.
The reason align-items: baseline; works with flex-direction: row; but not with flex-direction: column; is due to how the flexbox cross-axis behaves. When using flex-direction: row;, the cross-axis is vertical, so align-items: baseline; aligns the content based on the text baseline (the line where the text sits) vertically. This is why you see the text of all items aligning correctly.

However, when you switch to flex-direction: column;, the cross-axis becomes horizontal, and since the text is already aligned horizontally by default, using align-items: baseline; does not create any visible difference.

If you want to align items by their baselines in a column layout, you should use justify-content: baseline; instead, as it works along the main axis (which is vertical in a column). Alternatively, you can manually adjust the alignment by adding margin-left or padding-left to each item.
After some testing, I was able to generate a working script to block this, and it wasn't difficult.
I posted it here and explained how it works.
https://gist.github.com/jesussuarz/1b3d93236fc9bae113076d3bb3ee7a84
I got the same problem, but importing from "@auth0/nextjs-auth0/client" still gives a new error: 'Cannot find module '@auth0/nextjs-auth0/client' or its corresponding type declarations.'
I used numberBetween instead of rand: "category_id" => $this->faker->numberBetween(1, 5),
I was facing the same issue and went through the recommended approach of disabling the extensions; this time it was the "Code Runner" extension that was overriding my VS Code settings. I disabled the "Code Runner" extension and checked again. BINGO.
I tried enabling and disabling it to confirm it was actually the extension creating the mess, and yep, it was, but I caught it this time.
Yes, it is safe to say that the service account key (gcloud-service-account.json) can no longer be used by anyone. The "invalid_grant: Invalid JWT Signature" error basically means that the service account key has expired or been deleted.
To fix this, kindly follow this documentation: How to Fix | Adding New Service Account
I also ran into a similar issue with my Express.js API and my React.js client app. I use various AI tools to help me code and when building my vercel.json, it gave me a similar schema. The problem is that this format is actually a legacy schema for vercel.json files and they have moved to a different schema explained here. My best guess for your situation based on this documentation would be to change your vercel.json to this:
{
"builds": [
{
"src": "backend/wsgi.py",
"use": "@vercel/python",
"config": {
"maxLambdaSize": "15mb"
}
}
],
"rewrites": [
{
"source": "/(.)",
"destination": "backend/wsgi.py"
}
],
"headers": [
{
"source": "/(.)",
"headers": [
{
"key": "Access-Control-Allow-Credentials",
"value": "true"
},
{
"key": "Access-Control-Allow-Origin",
"value": "YOUR_VERCEL_FRONTEND_URL"
},
{
"key": "Access-Control-Allow-Methods",
"value": "GET,OPTIONS,PATCH,DELETE,POST,PUT"
},
{
"key": "Access-Control-Allow-Headers",
"value": "X-CSRF-Token, X-Requested-With, Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version, Authorization"
}
]
}
]
}
I am not sure if you were attempting to find a fix for the CORS issue by allowing all origins, or if you really did want to allow all origins, but I would highly recommend against that for security reasons. This solution ensures that only your client react app is the only origin that can communicate with your backend server.
Also be sure to update your settings.py to include these more secure changes:
# CORS_ALLOW_ALL_ORIGINS =True # Remove this
CORS_ALLOWED_ORIGINS = [
"http://localhost:3000", # React development server
"YOUR_VERCEL_FRONTEND_URL" # Your deployed frontend URL
]
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOW_METHODS = [
'DELETE',
'GET',
'OPTIONS',
'PATCH',
'POST',
'PUT',
]
CORS_ALLOW_HEADERS = [
'accept',
'accept-encoding',
'authorization',
'content-type',
'dnt',
'origin',
'user-agent',
'x-csrftoken',
'x-requested-with',
]
# ALLOWED_HOSTS = ["*"] # Remove this
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'api',
'corsheaders',
'rest_framework',
'rest_framework_simplejwt',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
This should hopefully get you closer to having a successful deployment. Let me know how it goes!
According to the JUnit documentation, the way to set a custom class loader for tests in JUnit 5 is to create a LauncherInterceptor and configure it by setting a system property, e.g. in Gradle:
test {
    systemProperty("junit.platform.launcher.interceptors.enabled", true)
    // ...
}
Then the LauncherInterceptor needs to be registered via the ServiceLoader mechanism by adding its fully qualified class name to /META-INF/services/org.junit.platform.launcher.LauncherInterceptor
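I haven't verified this exact setup, but a minimal interceptor could look roughly like this on a recent JUnit Platform version; the URLClassLoader here is just a stand-in for whatever class loader you actually need:

import org.junit.platform.launcher.LauncherInterceptor;

public class CustomClassLoaderInterceptor implements LauncherInterceptor {

    // Placeholder class loader - replace with your own.
    private final ClassLoader customClassLoader =
            new java.net.URLClassLoader(new java.net.URL[0], getClass().getClassLoader());

    @Override
    public <T> T intercept(Invocation<T> invocation) {
        Thread currentThread = Thread.currentThread();
        ClassLoader previous = currentThread.getContextClassLoader();
        currentThread.setContextClassLoader(customClassLoader);
        try {
            // Run discovery/execution with the custom context class loader installed.
            return invocation.proceed();
        } finally {
            currentThread.setContextClassLoader(previous);
        }
    }

    @Override
    public void close() {
        // Release resources held by the custom class loader, if any.
    }
}

The service file then contains just the fully qualified class name, e.g. com.example.CustomClassLoaderInterceptor (a hypothetical package).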
Thanks guys I followed the same steps and it got solved for me
The _core directory started appearing in numpy >= 1.26.
In my case, I generated a model with a newer version of scikit-learn but tried to load it with an older version.
Updating the scikit-learn version of the loader solved the issue for me.
Try pressing "Alt+S". You should see a menu pop up. Click "Script Editor", and find the option "Font" on the right. Click the font you want, and then press "OK". Hope this helps! :D
How are you running your app? What's the exact command you are using to start it?
I had the same issue as you and was able to use the debugger console after running the app with "npx expo run:ios"
You can find how to tune Gemini here: https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning
The SDK syntax you are using is indeed from deprecated bison
For a react native app, this config was missing on ios under AppDelegate.m didFinishLaunchingWithOptions:
[FIRApp configure];
Full docs : https://rnfirebase.io/#configure-firebase-with-ios-credentials
Please share the example inputs & outputs. Also share errors encountered.
I am making an awk that fills an array from a secondary file to later make a match with an IF statement. How can I arrange my AWK to do it from a BEGIN statement?
On Server 2025, you also have to enable it.
Here is how I managed to run SSR only in production (Angular 19.2), by adjusting:
angular.json
app.config.server.ts
main.server.ts
What would your approach be when you need different timeframes, e.g. 4 of 40 symbols? E.g. if you want to know 4*40 ATRs? The only way you could achieve this is to calculate it on your own using a lower timeframe, right?
RemoteViews does not support ConstraintLayout; refer here for a list of supported layouts and views: https://developer.android.com/reference/android/widget/RemoteViews
Have you closed and reopened your terminal? The dotnet tool adds the install location to your PATH but that change doesn't take effect until you start a new session.
OK, so the answer, courtesy of herrstrietzel, was to add the viewBox information to the inserted code; this allowed the SVG to scale with the div properly, as (he said) the viewBox information is not copied from the svg definition.
I also had an additional issue that was resolved by adding display: block;
to my CSS for the SVG; this removed the additional height added around the SVG and removed the remaining display issues (this was also the issue when hardcoding the width and height).
Late to the game, but this is what solved it for me: in Program.cs, add this line
app.MapDefaultControllerRoute();
Then had this in my cshtml file
<a class="nav-link text-dark" asp-controller="Home" asp-action="Index">Home</a>
Please Refer to the below video: https://youtu.be/ipkkmr0hHMs?si=021Nf0M0e9Beibsr
If you came here looking for how to do the opposite, like me, here's what I came up with:
Where newTime
format is "hh:mm a"
, e.g. "12:00 AM"
:
const [timeOnly, period] = newTime.split(' ');
const [hoursStr, minutes] = timeOnly.split(':');
const hoursNum = parseInt(hoursStr);
const hours = period === 'PM' ? ((hoursNum % 12) + 12) % 24 : (hoursNum % 12) % 24;
The only edge case it doesn't handle is if newTime === "0:00 PM"
. Otherwise it works well.
@syntax-junkie 's answer really helped me figure this out!
There isn't really a built-in way in SQL to create sequential data like a range of dates (unless you're using SQL Server 2022 or higher, where you have GENERATE_SERIES, which was recently added). Otherwise you have to accept the fact that there will be gaps for dates where there were no created or completed orders.
I've seen a few solutions to overcome that problem (see this Stackoverflow question). My solution to your problem:
with dates(Date) as (
select cast(GETDATE() as date)
union all
select dateadd(day, -1, Date)
from dates
where Date >= dateadd(day, -30, cast(GETDATE() as date))
), created_orders as (
select
cast(CreatedAt as Date) as CreatedAt,
count(*) as CreatedCount
from [tt].[order]
group by cast(CreatedAt as date)
), completed_orders as (
select
cast(CompletedAt as Date) as CompletedAt,
count(*) as CompletedCount
from [tt].[order]
group by cast(CompletedAt as date)
)
select
d.Date,
isnull(cr.CreatedCount, 0) as Created,
isnull(co.CompletedCount, 0) as Completed
from dates d
left join created_orders cr on cr.CreatedAt = d.Date
left join completed_orders co on co.CompletedAt = d.Date
Dates CTE is an example solution to fill the gaps. As you can see I created separate CTEs for created and completed orders. You cannot group by both dates at the same time as the output would be a cartesian product of those two columns. You need to group by them separately and only later join the results.
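For completeness, on SQL Server 2022+ the recursive dates CTE above could be replaced with GENERATE_SERIES; a sketch (untested, same column name assumed):

with dates(Date) as (
    select cast(dateadd(day, -s.value, GETDATE()) as date)
    from GENERATE_SERIES(0, 30) as s
)
select * from dates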
Solved? I have the same situation.
So I realised my first try above did not take indentation into account. Here is the snippet I came up with. It's good enough for my purpose. So far, in spite of the limitations described below, I've been able to use it in a satisfactory manner.
[
{
"key": "alt+shift+a",
"command": "editor.action.insertSnippet",
"args": {
// Indentation-saving snippet
// MATCH
// ^ Beginning of selected text
// ( *) $1 is a possible indentation that is to be reused for delimiters and content
// ('''\n *)? $2 is a possible opening delimiter, including following line break and indentation
// (.*?) $3 is a possible content (non-greedy)
// (\n *''')? $4 is a possible closing delimiter, including previous line break and indentation
// $ End of selected text
// SUBSTITUTION
// $1 If there was an indentation, indent the opening delimiter
// ${2:?\n:'''\n} If there was an opening delimiter, replace it with a line break, otherwise make a delimiter and line break
// $1 If there was an indentation, indent the content
// $3\n The content and a line break
// $1 If there was an indentation, indent the closing delimiter
// ${4:?\n:'''} If there was a closing delimiter, replace it with a line break, otherwise make a linebreak and a delimiter
// LIMITATIONS
// The selection must be the entire lines of the content to comment in / out
// Line breaks are created because we can't replace a group by nothing in this REGEX implementation
"snippet": "${TM_SELECTED_TEXT/^( *)('''\n *)?(.*?)(\n *''')?$/$1${2:?\n:'''\n}$1$3\n$1${4:?\n:'''}/s}"
},
"when": "editorTextFocus && editorHasSelection && editorLangId == 'python'"
}
]
My settings generate indentation as spaces, but I assume it could be adapted to tabs.
I'm accepting @rich neadle's reply as the answer, namely this did answer the initial question:
I am not aware of if-else if-else if-else then replacements that I could build with regex
Open your project/app in Android Studio, run flutter clean, run flutter pub cache repair, then click on the menu File -> Invalidate Caches, check all the boxes and click the Invalidate and Restart button. Once Android Studio restarts it will reconfigure everything; then run flutter pub upgrade.
Use Gradle Docker plugin instead.
You should not use SectionList at all. Try using FlatList with a header for the general info, data for the cars, and a footer for the rest. Or you could wrap the screen in a ScrollView and simply map the cars to views. SectionList should be used for grouping identical elements.
I found a no-registration .sav to excel converter - quick, easy, and doesn’t send spam messages afterwards :P
Maybe this could help: https://converter.andre.ai/sav-to-excel
My solution was to not use the using statement for the Excel or Outlook MS Office services I wanted.
On line 18, maybe try script:GetDescendants(); this also gets the children of the children. And if you want it to account for MeshParts and other types of parts, on line 20 you should use if brick:IsA("BasePart")
The issue with this was that it was a local Service Fabric application, and the Service Fabric realm does not recognize anything beyond the SfAppCluster level, hence it was not able to pick up the Visual Studio credentials.
What we did was go the service principal route and set our environment variables ( https://learn.microsoft.com/en-us/dotnet/azure/sdk/authentication/local-development-service-principal?tabs=azure-portal%2Cwindows%2Ccommand-line )
and place the S_+NI cert in the SfAppCluster folder locally.
I second this question, and to the "why do you need this" posters I say "not everyone uses Visual Studio as their editor; we don't, but when debugging code it is too easy to accidentally modify the source file, which we don't want. I'm not in Visual Studio for any other reason than to debug my code, for reasons that are irrelevant to the question."
Every editor I've ever used has the ability to open files read-only, so why is this question being scorned? I want my debugger to have an option to always open source files read-only, and will manage the edits to the file elsewhere.
Most common example: sometimes the wrong window is active while I'm debugging, and I accidentally type into the source file via the debugger, not into the intended window (be it email or a DOS command prompt or whatever). Does nobody else's active window switch out from under them unexpectedly?
Maybe the issue is what the error message says: invalid gridOptions property 'gridOptions'. Check the input @Input() gridOptions!: GridOptions; of GridTableComponent - what do you pass in there? Probably a wrong input is coming in, like { gridOptions: {...} }.
The CPU profile can't be processed by --prof-process but it can be loaded into the developer tools of a browser in the "Performance" tab and viewed there. Would be nice if the docs made that a bit more obvious!
I had this error with Webpack Encore on a Symfony project. To solve it you need to add this option in your webpack.config.js:
Encore
.enableTypeScriptLoader(function(tsConfig) {
tsConfig.transpileOnly = true;
})
;
This is due to compatibility issues between older versions of Python and Django when handling function signatures. For example, related_lookup_fields is deprecated in the Django admin, so the part related_lookup_fields = { 'fk': ['article'], }
should be replaced by autocomplete_fields = ["article"]
or just removed entirely. In a nutshell, this will fix only THAT part, and you'd have to go over each section of the code, i.e. just move over to a newer version.
To rename a Git repo that is cloned to a folder (project) on your local drive (with the same name, of course) while preserving the connection between them, follow these steps.
1- Rename your remote repo on GitHub/GitLab: Go to your repo settings → Rename it to
2- Rename the local folder on your machine directly on the folder name or in bash: mv
3- Now, in a local terminal go into the renamed local folder and update Git’s remote URL using: cd <path to the folder's new name> git remote set-url origin <new_repo_url>
Note: don't forget to replace <new_repo_url> with the new repo URL from GitHub/GitLab, which includes the updated name.
4- Finally, you might want to verify the connection is working: git remote -v
If it shows the correct new URL, we're all set!
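Putting it together, the commands look roughly like this (old-name, new-name and the remote URL are placeholders; use your own):

mv old-name new-name                                        # step 2: rename the local folder
cd new-name                                                 # step 3: go into the renamed folder
git remote set-url origin git@github.com:your-user/new-name.git
git remote -v                                               # step 4: should show the new URL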
Environment variables are loaded by the execution environment when it is instantiated. So, for example, if you start two terminals, use one to set an environment variable, then try to read it with the other, it will not be visible.
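A quick way to see this in bash (MY_VAR is just a placeholder name):

# Terminal 1
export MY_VAR=hello
echo "$MY_VAR"    # prints: hello

# Terminal 2 (opened separately)
echo "$MY_VAR"    # prints nothing - this shell never loaded that variable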
I am running into a similar issue. Has this ever been resolved?
If you have multiple Python environments (e.g., using virtualenv, conda, etc.), make sure that you're working in the correct environment where the package is installed. You can verify the Python environment being used by running:
which python   # On macOS/Linux
where python   # On Windows
You can use a different file extension like .cmd when exporting and then rename the file extension back to .xlsx. This will prevent it from opening automatically while still leaving you with an Excel file in the end.
Check whether gradle.properties exists and is not added to .gitignore. It was cause of the issue in our case.
Finally! Someone who's also run into this issue. Did you ever get it figured out? With this setup, notifications from firebase cloud messaging or libraries that use it under the hood do work. So my theory is that the FCM notification handler basically overrides Expo's, and Expo's handlers stop receiving incoming notifications. No idea yet how to get them to both work.
You can use the 'pause_collection' parameter and set 'behavior' to 'keep_as_draft' to keep the subscription active, and add an automatic resume date with 'resumes_at' and your new date.
https://docs.stripe.com/billing/subscriptions/pause-payment
For example: my customer has a 1-year subscription, from 1st January 20XX to 1st January 20XX+1, and I want to offer him 30 days free, so his subscription will run to 31st January.
I set pause_collection to keep his sub active. Next, I set his next billing to a new date, which I put in 'resumes_at'.
There is no proration by doing so.
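A rough sketch with the Python library (untested; the API key, subscription id and timestamp are placeholders):

import stripe

stripe.api_key = "sk_test_..."

# Pause payment collection but keep the subscription active,
# and automatically resume collection at the new date.
stripe.Subscription.modify(
    "sub_xxx",
    pause_collection={
        "behavior": "keep_as_draft",
        "resumes_at": 1738368000,  # unix timestamp of the resume date
    },
)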
This stackblitz from this question was useful for me
It's not an issue with field caching or mapping; rather, it's about maintaining the consistency of the AI Search physical structure throughout its lifetime. The definitions of these fields, except for "Retrievable," will consume more storage and significantly impact search behavior. Therefore, rebuilding your index is the only viable option.
Although you can add new fields at any time, existing field definitions are locked in for the lifetime of the index. For this reason, developers typically use the Azure portal for creating simple indexes, testing ideas, or using the Azure portal pages to look up a setting. Frequent iteration over an index design is more efficient if you follow a code-based approach so that you can rebuild the index easily.
https://learn.microsoft.com/en-us/azure/search/search-what-is-an-index#field-attributes
Do I need to reset or delete the indexer before the new fields can populate? => Yes, rebuild it.
Is there a way to force Azure Search to re-index only the new fields? => The new fields will follow the initial fields definitions (schema)
Could this be due to indexer caching or an issue with field mappings? => No
disableVirtualization={true} for the DataGrid solved the problem ;)
Before re-scheduling the script, make sure to save the invoice and exit; this avoids processing the same invoice in multiple scheduled script instances at the same time.