let defaultStyle = {
  one: 1,
  two: 2,
  three: 3
}

function styling(style = defaultStyle, ...ruleSetStock) {
  return ruleSetStock.map(ruleSet => {
    console.log(ruleSet)
    return style[ruleSet]
  })
}

console.log(styling(undefined, "one", "two", "three"))
Apart from the personal workbook mentioned in the comments, you can also use VBA code to export/import modules, see e.g. this SO post: Mass importing modules & references in VBA
With no `explain` and no info on data distribution regarding your `date`, `detail` and `status` columns, it's difficult to say (for `date`: could you give an approximate number of rows over the 3-month period, so that we know what percentage of the total 65 million rows they represent?), but in decreasing order of probability I would say:
`BETWEEN`
First of all: can you have future dates in `loginaudit`? If not, instead of `Cast_as(date) BETWEEN [3 months ago] AND [today]`, use just `Cast_as(date) >= [3 months ago]`: you'll spare a comparison for each row (`BETWEEN` is `date >= [start] AND date <= [today]`, so if your data is always <= [today], do not make your RDBMS check for it).
Then an index over `date` would allow the RDBMS to quickly limit the number of rows to inspect for further tests. However, each row's `date` has to be passed through `Cast_as()` to return a value to be compared, so an index on the column would be useless (an index on a column is only used if you compare the column directly to some bounds). You could improve your query by creating a function index:
CREATE INDEX la_date_trunc_ix ON loginaudit (builtin.Cast_as(loginaudit.date, 'TIMESTAMP_TZ_TRUNCED'));
But /!\ this will be more costly than a direct index on the column, and given the number of log entries you write per day you perhaps do not want to slow down your inserts too much. You'll have to find a balance between a simple index on `date` (which will slow down writes too, but not as much) and a function index (which will slow down writes more, but be a bit smaller and quicker on reads).
But reading further, you're truncating the row's `date` to `TIMESTAMP_TZ_TRUNCED` (which truncates 2025-03-15 15:05:17.587282 to 2025-03-15 00:00:00.000000?), to compare it to something described as `DATETIME_AS_DATE` (so probably 2025-03-15). So, be it 2025-03-15 15:05:17.587282 or 2025-03-15 00:00:00.000000, both compare the same way against 2025-03-15: `builtin.Cast_as` is useless, just rewrite your condition directly as `loginaudit.date >= builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE')`.
… And of course do not forget to have a (simple!) index on `loginaudit(date)`.
(And then, depending on the ratio of old data to the last 3 months' data, maybe partitioning by month; but let's first try with correct index use.)
`Relative_ranges` is stable
I hope SuiteQL delivers its `builtin` functions as stable; you should make sure of it.
If it is stable, the RDBMS can understand that `builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE')` will output the same value for each row, so it can precompute it at the start of the query and treat it as a constant (thus allowing it to use the index).
If it is not stable, the RDBMS will prudently recompute it for each row (and will thus probably prioritize other indexes over this one, which would logically be the most selective).
This is probably less of a concern, depending on the proportion of your rows having the given values. Moreover, I would expect the `date` index to be the big boost. So I'll only give leads here, without expanding; you can come back to this if all efforts on `date` aren't enough.
But if your audit contains 20 % of 'Success' compared to other values, an index on it will be useful (particularly as a composite index with the `Cast_as()` function as first member, if you stayed with a function index).
`NOT(detail IN (…)) OR detail IS NULL` filter
This one is more complex. There too, judge the proportion of rows matching the condition: if 80 % of your rows match, no need to index. Else:
First of all, rewriting `NOT (detail IN (…))` to `detail NOT IN (…)` would make it clearer (and maybe more optimizable by the RDBMS? Not sure, an `explain plan` would tell).
Then I would try to make it a positive `IN`: instead of excluding some `detail` values, list all the possible other values.
And as you have a `NULL` to test too, which will prevent Oracle from using an index on it, you would probably test with `COALESCE(detail, '-') IN ('…', '…', '-')` after having created a function index on `COALESCE(detail, '-')`.
Kafka doesn't guarantee ordering when you don't specify a message key. Why?
Ordering is only guaranteed per partition because, in a consumer group with multiple instances, Kafka guarantees that each partition will be processed by a single instance of each consumer group.
When you use a message key, messages with the same key are produced to the same partition (you can take a look at the default producer partitioning when a message key is used, which is based on a hash computed by the so-called "murmur2" algorithm).
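As an illustration, here is a minimal kafkajs sketch (the client, broker address and topic name are assumptions, not taken from the question) showing that sending with the same key keeps related messages in one partition and therefore in order:

import { Kafka } from "kafkajs";

const kafka = new Kafka({ brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function main() {
  await producer.connect();
  // Both messages carry the same key, so the default (murmur2-based) partitioner
  // hashes them to the same partition and their relative order is preserved.
  await producer.send({
    topic: "orders",
    messages: [
      { key: "user-42", value: "order created" },
      { key: "user-42", value: "order paid" },
    ],
  });
  await producer.disconnect();
}

main().catch(console.error);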
Most likely, PyInstaller didn't find `cv2`, not that it failed to recognize it as a package. Run the `ls` command in your terminal to see the files and directories in the current directory. If these don't match your development environment, use the `cd` command to navigate to the directory where your `.conda` or `.venv` environment is located, so that PyInstaller can find `cv2`.
If you already have a POJO, you can use the Instancio library with it for test data setup.
Try this:
(^|\n).*?(\/\/|SQLHELPER)
for // comments. Each match ends either with SQLHELPER or with //. Then you can drop the // matches with an additional check.
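A minimal TypeScript/JavaScript sketch of that additional check (the sample input is made up for illustration):

const text = [
  "SELECT 1;",
  "// SQLHELPER inside a comment is skipped",
  "SET x = 1; SQLHELPER doSomething",
].join("\n");

const re = /(^|\n).*?(\/\/|SQLHELPER)/g;

// Keep only the matches that end with SQLHELPER; matches ending with //
// mark comment starts, and the regex never re-enters that line.
const hits = [...text.matchAll(re)].filter(m => m[2] === "SQLHELPER");
hits.forEach(m => console.log(m[0].trim()));
// Prints: SET x = 1; SQLHELPER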
Sub CrearPresentacionRenacimiento()
    Dim ppt As Presentation
    Dim slide As slide
    Dim slideIndex As Integer

    ' Create a new presentation
    Set ppt = Application.Presentations.Add

    ' Slide 1 - Title
    slideIndex = slideIndex + 1
    Set slide = ppt.Slides.Add(slideIndex, ppLayoutTitle)
    slide.Shapes.Title.TextFrame.TextRange.Text = "Artistas del Renacimiento"
    slide.Shapes.Placeholders(2).TextFrame.TextRange.Text = "Pintores, Arquitectos y Escultores"

    ' Slide 2 - Introduction
    slideIndex = slideIndex + 1
    Set slide = ppt.Slides.Add(slideIndex, ppLayoutText)
    slide.Shapes.Title.TextFrame.TextRange.Text = "¿Qué fue el Renacimiento?"
    slide.Shapes.Placeholders(2).TextFrame.TextRange.Text = _
        "El Renacimiento fue un movimiento cultural nacido en Italia entre los siglos XIV y XVI. " & _
        "Se caracterizó por el regreso a los valores clásicos, el humanismo, y un gran florecimiento de las artes."

    ' Slide 3 - Painters
    slideIndex = slideIndex + 1
    Set slide = ppt.Slides.Add(slideIndex, ppLayoutText)
    slide.Shapes.Title.TextFrame.TextRange.Text = "Pintores del Renacimiento"
    slide.Shapes.Placeholders(2).TextFrame.TextRange.Text = _
        "• Leonardo da Vinci – La Última Cena, La Gioconda" & vbCrLf & _
        "• Rafael Sanzio – La Escuela de Atenas" & vbCrLf & _
        "• Sandro Botticelli – El nacimiento de Venus" & vbCrLf & _
        "• Miguel Ángel – Frescos de la Capilla Sixtina"

    ' Slide 4 - Architects
    slideIndex = slideIndex + 1
    Set slide = ppt.Slides.Add(slideIndex, ppLayoutText)
    slide.Shapes.Title.TextFrame.TextRange.Text = "Arquitectos del Renacimiento"
    slide.Shapes.Placeholders(2).TextFrame.TextRange.Text = _
        "• Filippo Brunelleschi – Cúpula de Florencia" & vbCrLf & _
        "• Leon Battista Alberti – Santa Maria Novella" & vbCrLf & _
        "• Andrea Palladio – Villas palladianas y tratados de arquitectura"

    ' Slide 5 - Sculptors
    slideIndex = slideIndex + 1
    Set slide = ppt.Slides.Add(slideIndex, ppLayoutText)
    slide.Shapes.Title.TextFrame.TextRange.Text = "Escultores del Renacimiento"
    slide.Shapes.Placeholders(2).TextFrame.TextRange.Text = _
        "• Donatello – David (bronce), Gattamelata" & vbCrLf & _
        "• Miguel Ángel – David (mármol), La Piedad" & vbCrLf & _
        "• Lorenzo Ghiberti – Puertas del Paraíso (Florencia)"

    ' Slide 6 - Conclusion
    slideIndex = slideIndex + 1
    Set slide = ppt.Slides.Add(slideIndex, ppLayoutText)
    slide.Shapes.Title.TextFrame.TextRange.Text = "Conclusión"
    slide.Shapes.Placeholders(2).TextFrame.TextRange.Text = _
        "El Renacimiento fue una época de esplendor artístico y cultural. " & _
        "Sus artistas sentaron las bases del arte moderno y siguen siendo fuente de inspiración hasta hoy."

    MsgBox "Presentación creada con éxito.", vbInformation
End Sub
I have the same problem: a VBA Columns.AutoFit never returns, it stops dead in its tracks. I've tried a dozen combinations of Range.AutoFit (and with EntireColumn in the Range, too). This code ran literally >10,000 times, then stopped working, so it's an Excel problem, maybe with the VBA compiler. I only made code changes to a single module devoted to a completely different area of my program. I've reorganized my code to see if the bug was squashed, but it wasn't. Any ideas?
I'm trying to make a file downloader for a Google Drive shared link, without the API.
None of the solutions work :( Waven AI can't handle it.
Only this code works, but it downloads only one file (a direct link to the file); I can't download all the files from a folder.
import gdown
gdown.download("https://drive.google.com/uc?id=1PbM6k8211A4RFuBT7QEl0cHRWpgOFvLx", "file.mp3")
You can download the email as a file and then use a web tool like mailscreenshot.com to generate an image file from that email for you.
An old thread, but I'll pick it up anyway.
While the quick demonstration runs without error in GNU Octave (no matter what version I use), it doesn't result in a proper fit and the fitted constants are incredibly large (6.9415e+21 2.4425e+11 -7.7388e+21). Any idea why that is happening?
You need to change the maximum time allowed to execute SQL queries; check this. I'm not sure Oracle Database has an attribute to change this, but it can be helpful.
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator at [email protected] to inform them of the time this error occurred, and of the actions you performed just before the error.
More information about this error may be available in the server error log.
Apache/2.4.58 (Win64) PHP/8.2.13 mod_fcgid/2.3.10-dev Server at localhost Port 80
Just ask ChatGPT and tell it to explain it to you.
I work a lot with the M5Stack family in Arduino, and character code 247 displays the degree symbol.
And yes, I have seen a lot of unanswered questions. Thank you, Hayk!
PS: I found it with the same method, by displaying all 255 characters :-)
This solved the issue for me:
1. Freed up more space on my hard drive (cleared out 20 GB+).
2. Restarted computer.
3. Ran Delphi as administrator.
Now it's compiling and working perfectly fine.
First of all, you have to make sure the mod_headers module is installed on the Apache server.
In HTTP requests, cookies are sent in the Cookie header:
Cookie: name=value
Then you should use the directive:
RequestHeader add|append|edit|edit*|merge|set|setifempty|unset header [[expr=]value [replacement] [early|env=[!]varname|expr=expression]]
For more info, see:
https://httpd.apache.org/docs/2.4/mod/mod_headers.html
RequestHeader edit Cookie test ""
should do the job.
Try it.
If none of the above helped, try block copying using FileChannel.transferFrom.
You can see the following examples.
Sites with smooth page transitions + AJAX, like https://dixonandmoe.com/ (uses Barba). See From Here.
This is usually due to insufficient resources (usage below the low-water level). You can work through the following checks:
1. When a single task cannot complete, the SQL task may be too large, resulting in insufficient resources. In this scenario, first analyze whether the SQL task can be split into smaller pieces; if it includes large table computations, check whether the partition design lets you take advantage of partition pruning.
2. Check whether many tasks are currently running. If there are many concurrent executions, analyze whether the tasks can be staggered.
3. Analyze BE memory usage, that is, whether memory is released normally or whether there is a memory leak causing the resource shortage. You can do a preliminary analysis with memtracker.
With Spring AI v1.0.0, it is possible to pass it as an application property, something like this:
spring.ai.vectorstore.pgvector.table-name=<TABLE_NAME>
Fix: Update to the latest version of the RAD Studio IDE
--
After updating to the latest RAD Studio 10.3, this issue doesn't happen anymore.
It seems like it might have been a bug in earlier versions of the IDE, so just update your IDE.
Error: [Errno 13] Permission denied: 'C:\\Users\\Enter_Computers\\AppData\\Roaming\\jupyter\\runtime\\jpserver-4676-open.html'
Please help: how can I solve it?
I think you need to check whether gzip is installed; see the documentation on how to install VS Code on your Linux distro: https://code.visualstudio.com/docs/setup/linux#_install-vs-code-on-linux. Did you try running just this line? `tar -xzvf ~/.vscode-server/bin/17baf841131aa23349f217ca7c570c76ee87b957/vscode-server.tar.gz`
'scp' is used for copying to another remote machine or a local machine; it's not recommended to download from the web using scp. 'wget' is used for downloading files from the internet without needing a browser; that is its main purpose.
You can stick with that installation if it works for you.
I had exactly the same issue a few weeks ago! Getting Apache's "It works!" page instead of the whoami response was driving me crazy.
@HansKilian is right that it's weird: your `docker ps` shows Traefik binding to port 80 correctly, so something else is intercepting the requests.
First, check what's actually using port 80:
sudo lsof -i :80
If you see Apache there, that's your problem - it's intercepting before Docker gets the request.
@YuvarajM's suggestion is solid - try moving Traefik to another port:
ports:
- "8081:80" # Change this line
- "8080:8080"
Then test: curl -H "Host: whoami.docker.localhost" http://127.0.0.1:8081
Or go with @DavidMaze's path-based routing approach (honestly, this avoids all the DNS headaches):
labels:
- "traefik.http.routers.whoami.rule=PathPrefix(`/whoami`)"
Access via: http://localhost:8081/whoami
@HansKilian mentioned checking your Docker context; this caught me once too:
docker context ls
Make sure the `*` is on `default`, not `desktop-linux`. Sometimes WSL does weird things with contexts.
@Z4-tier's debugging tip is gold - connect directly to the container:
docker exec -it traefik-docker-rp-reverse-proxy-1 sh
# Inside container:
wget -qO- --header="Host: whoami.docker.localhost" http://localhost
Also check Traefik's dashboard at http://localhost:8080; you should see your whoami service listed there. If it's not showing up, there's definitely a config issue.
Since you got "Moved Permanently" when testing @YuvarajM's suggestion, Traefik might be trying to redirect HTTP to HTTPS. Try adding `-L` to follow redirects:
curl -L -H "Host: whoami.docker.localhost" http://127.0.0.1:8081
The port change solution usually fixes both the Apache conflict and the WSL networking issues. Let me know which approach works!
I found the answer myself. It is not possible in the Google Cloud Console. The permission needs to be given in the [Google Workspace Admin Console](https://admin.google.com/).
Go to the Admin Roles and assign an admin to the Groups Admin:
The next screen will show an option to add a service account and use the email address of the service account (in my example above [email protected]):
Why does this happen? Shouldn't the response be set by the time the load event fires? If not, where is this documented?
The `XMLHttpRequestUpload: load` event is fired when the upload completes successfully; at that point the full response has usually not been received yet.
And what event is the correct event for getting the response?
You want to register an `XMLHttpRequest: load` event listener. That event is fired when the whole request, including the response, completes successfully.
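For reference, a minimal sketch showing the two listeners side by side (the URL is just a placeholder):

const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/data");

// Fires when the whole request, including the response body, has completed:
// xhr.responseText is available here.
xhr.addEventListener("load", () => {
  console.log(xhr.status, xhr.responseText);
});

// Fires when the *upload* phase finishes; the response is usually not in yet.
xhr.upload.addEventListener("load", () => {
  console.log("upload finished");
});

xhr.send();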
Use floors_map_widgets. It is designed to build paths between SVG map points, but it also has the functionality of zooming in and clicking on objects.
For me, the error came from the fact that I was using both react-native-firebase and the Firebase SDK in my application.
I had previously decided to use react-native-firebase, but ended up using the Firebase SDK (so I imported auth from "firebase/auth").
To resolve the issue, I had to remove all "react-native-firebase" dependencies and then delete google-services.json
Perhaps try a full power reset:
1. Shut down the laptop completely.
2. Unplug the power adapter.
3. Remove the battery (if possible).
4. Press and hold the power button for 15–30 seconds (drains all residual power).
5. Reconnect power and battery.
6. Boot up—Windows should detect the Bluetooth adapter again.
This works because the Bluetooth/Wi‑Fi combo chip is on a shared module and may lose its USB power state. Draining the capacitors resets the Embedded Controller and restores the device. Works reliably on ThinkPads (T440/T450/P50, etc.).
To migrate from AWS Managed Blockchain (Hyperledger Fabric) to a self-managed setup or another cloud provider like Ucartz, start by exporting critical artifacts. Use the AWS CLI or SDK to retrieve the ledger data via the GetBlock and GetLedger APIs. Manually back up the genesis block, MSP certificates, admin credentials, and channel configurations. Store peer and orderer certificates securely. Recreate the network on the target setup using these backups. For chaincode, ensure you have the source code and metadata. While AWS doesn't provide a full one-click export, methodical backup of each component ensures a smooth transition to a self-hosted environment.
From the documentation here, it looks like you'll have to change the postgresql.conf file from:
log_statement = 'all'
to:
log_statement = 'none'
You can use Github Pages for React FE, Render for express api and NeonDB for your database, all of which are absolutely free.
I found a way to create a virtual network interface (like veth in linux):
ifconfig feth0 create
ifconfig feth1 create
ifconfig feth0 peer feth1
Then you can add it to the existing bridge:
ifconfig bridge1 addm feth0
All commands must be run as root (sudo).
I can't get this bankid4keycloak setup running on Railway: status 502 when trying to open the admin console. I've tried different settings (--dev, --opt., --start) and different sets of variables; no DB shows up. Minimal Dockerfile below; I would appreciate some guidance. Regards, Mike
FROM quay.io/keycloak/keycloak:26.0.4 AS builder
USER root
#providers/bankid4keycloak-26.0.0-SNAPSHOT.jar
COPY providers/bankid4keycloak-*.jar /opt/keycloak/providers/
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:26.0.4
USER root
COPY --from=builder /opt/keycloak/ /opt/keycloak/
EXPOSE 8080
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
CMD ["start-dev"]
The best I could do was read through Mapbox's examples of offline map management and notice their use of the term "size" to reference bytes. That was enough for me to get enough certainty to report to the user the memory usage of their map from the "completedResourceSize" field's value.
Here is one of their guides that uses size to refer to the disk size of a map in bytes
Not checking for `github.copilot.chat.editor.enableLineTrigger` does it for me.
Add this to your android/build.gradle:

allprojects {
    repositories {
        google()
        mavenCentral()
    }
    configurations.all {
        resolutionStrategy {
            force 'androidx.core:core:1.13.1' // or latest matching version
        }
    }
}
I realise that this is an old question but I recently had a related issue that might be helpful to others. There may be circumstances where it is desirable to reset the auto_increment value but probably never necessary to do so. I have a form script that creates a new record whenever the script is called because the (auto_incremented) id number is needed for the process to proceed. If the user abandons the new record creation then the record is deleted which in most circumstances is fine. However, this leaves gaps in the id numbers and my client wants them to be contiguous. The solution for me was to reset the auto_increment number after deleting but I needed to be sure that no other user had created another record during this process. The solution was to use
$db->query("ALTER TABLE table_name AUTO_INCREMENT = 1");
This will reset the auto_increment number to the next available number so no problem if another user has created a new record meanwhile (unlikely in this case) except it will still leave a 'gap' in the numbers if a previous id number is then deleted. As the record was deleted anyway there was no implication regarding links to other tables etc. My client was happy with this possibility so that was the solution I used.
A simple way to clone an existing environment is to create a new environment.
#Export your active environment to a new file
conda env export > environment.yml
# Create new environment from file with newEnvironmentName environment name
conda env create --name newEnvironmentName --file=environment.yml
# OR if you want to create with the same environment name with environment.yml file
conda env create -f environment.yml
// App.jsx
import React from "react";
import { BrowserRouter as Router, Routes, Route, Link } from "react-router-dom";
import Home from "./pages/Home";
import Product from "./pages/Product";
import Cart from "./pages/Cart";

export default function App() {
  return (
    <Router>
      <nav>
        <Link to="/">Home</Link> <Link to="/cart">Cart</Link>
      </nav>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/product/:id" element={<Product />} />
        <Route path="/cart" element={<Cart />} />
      </Routes>
    </Router>
  );
}
It looks like Spring Boot has a specific API for implementing streaming responses. Maybe this is what you should be using.
https://dzone.com/articles/streaming-data-with-spring-boot-restful-web-servic-1
You can do:
x = 10
println("$(typeof(x)) $x")
or just:
@show x
which prints:
x = 10
(showing both the name and the value; from the REPL you can still see the type with typeof(x) if you want)
If you want type + value in one string:
println("$(typeof(x)) $(x)")
Example output:
Int64 10
I need to reliably detect if an iOS device has been rebooted since the app was last launched. The key challenge is to differentiate a genuine reboot from a situation where the user has simply changed the system time manually.
Initial Flawed Approaches:
Using `KERN_BOOTTIME`: A common approach is to use `sysctl` to get the kernel boot time.
// Fetches the calculated boot time
func currentBootTime() -> Date {
    var mib = [CTL_KERN, KERN_BOOTTIME]
    var bootTime = timeval()
    var size = MemoryLayout<timeval>.size
    sysctl(&mib, UInt32(mib.count), &bootTime, &size, nil, 0)
    return Date(timeIntervalSince1970: TimeInterval(bootTime.tv_sec))
}
The problem is that this value is not a fixed timestamp. It's calculated by the OS as wallClockTime - systemUptime. If a user manually changes the clock, the returned `Date` will also shift, leading to a false positive.
Using `systemUptime`: Another approach is to check the system's monotonic uptime via `ProcessInfo.processInfo.systemUptime` or `clock_gettime`. If the new uptime is less than the last saved uptime, it must have been a reboot.
The problem here is the "reboot and wait" scenario. A user could reboot the device and wait long enough for the new uptime to surpass the previously saved value, leading to a false negative.
The Core Challenge:
How can we create a solution that correctly identifies a true reboot and is immune to all edge cases, including:
Manual time changes (both forward and backward).
The "reboot and wait" scenario.
After extensive testing, the most robust solution is to correlate three pieces of information on every app launch:
System Uptime: A monotonic clock that only resets on reboot.
Wall-Clock Time: The user-visible time (`Date()`).
Calculated Boot Time: The value from `KERN_BOOTTIME`.
The OS maintains a fundamental mathematical relationship between these three values:
elapsedBootTime ≈ elapsedWallTime - elapsedUptime
If this equation holds true between two app launches, it means we are in the same boot session. Any change in the reported boot time is simply a result of a manual clock adjustment.
If this equation is broken, it can only mean that a new boot session has started, and the underlying uptime and boot time values have been reset independently of the wall clock. This is a genuine reboot.
Here is a complete, self-contained class that implements this logic. It correctly handles all known edge cases.
import Foundation
import Darwin
/// A robust utility to definitively detect if a device has been rebooted,
/// differentiating a genuine reboot from a manual clock change.
public final class RebootDetector {
    // MARK: - UserDefaults Keys
    private static let savedUptimeKey = "reboot_detector_saved_uptime"
    private static let savedBootTimeKey = "reboot_detector_saved_boot_time"
    private static let savedWallTimeKey = "reboot_detector_saved_wall_time"

    /// Contains information about the boot state analysis.
    public struct BootAnalysisResult {
        /// True if a genuine reboot was detected.
        let didReboot: Bool
        /// The boot time calculated during this session.
        let bootTime: Date
        /// A human-readable string explaining the reason for the result.
        let reason: String
    }

    /// Checks if the device has genuinely rebooted since the last time this function was called.
    ///
    /// This method is immune to manual time changes and the "reboot and wait" edge case.
    ///
    /// - Returns: A `BootAnalysisResult` object with the result and diagnostics.
    public static func checkForGenuineReboot() -> BootAnalysisResult {
        // 1. Get current system state
        let newUptime = self.getSystemUptime()
        let newBootTime = self.getKernelBootTime()
        let newWallTime = Date()

        // 2. Retrieve previous state from UserDefaults
        let savedUptime = UserDefaults.standard.double(forKey: savedUptimeKey)
        let savedBootTimeInterval = UserDefaults.standard.double(forKey: savedBootTimeKey)
        let savedWallTimeInterval = UserDefaults.standard.double(forKey: savedWallTimeKey)

        // 3. Persist the new state for the next launch
        UserDefaults.standard.set(newUptime, forKey: savedUptimeKey)
        UserDefaults.standard.set(newBootTime.timeIntervalSince1970, forKey: savedBootTimeKey)
        UserDefaults.standard.set(newWallTime.timeIntervalSince1970, forKey: savedWallTimeKey)

        // --- Analysis Logic ---

        // On first launch, there's no previous state to compare with.
        if savedUptime == 0 {
            return BootAnalysisResult(didReboot: true, bootTime: newBootTime, reason: "First launch detected.")
        }

        // Primary Check: If uptime has reset, it's always a genuine reboot. This is the simplest case.
        if newUptime < savedUptime {
            return BootAnalysisResult(didReboot: true, bootTime: newBootTime, reason: "Genuine Reboot: System uptime was reset.")
        }

        // At this point, newUptime >= savedUptime. This could be a normal launch,
        // a manual time change, or the "reboot and wait" edge case.
        let savedWallTime = Date(timeIntervalSince1970: savedWallTimeInterval)
        let savedBootTime = Date(timeIntervalSince1970: savedBootTimeInterval)

        let elapsedUptime = newUptime - savedUptime
        let elapsedWallTime = newWallTime.timeIntervalSince(savedWallTime)
        let elapsedBootTime = newBootTime.timeIntervalSince(savedBootTime)

        // The Core Formula Check: Does the math add up?
        // We expect: elapsedBootTime ≈ elapsedWallTime - elapsedUptime
        let expectedElapsedBootTime = elapsedWallTime - elapsedUptime

        // Allow a small tolerance (e.g., 5 seconds) for minor system call inaccuracies.
        if abs(elapsedBootTime - expectedElapsedBootTime) < 5.0 {
            // The mathematical relationship holds. This means we are in the SAME boot session.
            // It's either a normal launch or a manual time change. Both are "not a reboot".
            // We can even differentiate them for more detailed logging.
            if abs(elapsedWallTime - elapsedUptime) < 5.0 {
                return BootAnalysisResult(didReboot: false, bootTime: newBootTime, reason: "No Reboot: Time continuity maintained.")
            } else {
                return BootAnalysisResult(didReboot: false, bootTime: newBootTime, reason: "No Reboot: Manual time change detected.")
            }
        } else {
            // The mathematical relationship is broken.
            // This can only happen if a new boot session has started, invalidating our saved values.
            // This correctly catches the "reboot and wait" scenario.
            return BootAnalysisResult(didReboot: true, bootTime: newBootTime, reason: "Genuine Reboot: Time continuity broken.")
        }
    }

    // MARK: - Helper Functions

    /// Fetches monotonic system uptime, which is not affected by clock changes.
    private static func getSystemUptime() -> TimeInterval {
        var ts = timespec()
        // CLOCK_MONOTONIC is the correct choice for iOS as it includes sleep time.
        guard clock_gettime(CLOCK_MONOTONIC, &ts) == 0 else {
            // Provide a fallback for safety, though clock_gettime should not fail.
            return ProcessInfo.processInfo.systemUptime
        }
        return TimeInterval(ts.tv_sec) + TimeInterval(ts.tv_nsec) / 1_000_000_000
    }

    /// Fetches the calculated boot time from the kernel.
    private static func getKernelBootTime() -> Date {
        var mib = [CTL_KERN, KERN_BOOTTIME]
        var bootTime = timeval()
        var size = MemoryLayout<timeval>.size
        guard sysctl(&mib, UInt32(mib.count), &bootTime, &size, nil, 0) == 0 else {
            // In a real app, you might want to handle this error more gracefully.
            fatalError("sysctl KERN_BOOTTIME failed; errno: \(errno)")
        }
        return Date(timeIntervalSince1970: TimeInterval(bootTime.tv_sec))
    }
}
Call the function early in your app's lifecycle, for example in your `AppDelegate`:
import UIKit
@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {

        let result = RebootDetector.checkForGenuineReboot()

        if result.didReboot {
            print("✅ A genuine reboot was detected!")
        } else {
            print("❌ No reboot occurred since the last launch.")
        }
        print(" Reason: \(result.reason)")
        print(" Current session boot time: \(result.bootTime)")

        // Your other setup code...
        return true
    }
}
We do not really consider constant factors in time complexity. It does not matter whether a constant is multiplied with, added to, subtracted from, or divided into the running time.
If k is a constant, O(n+k), O(kn), O(n-k) and O(n/k) are all the same as O(n), because a constant factor does not affect the growth rate. However, when a factor changes the growth rate itself, for example a logarithmic factor as in
O(log n)
then it does matter. I hope this helps.
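A quick sketch of why a positive constant k vanishes, written out with the standard big-O definition (LaTeX):

f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c\, g(n) \ \text{for all } n \ge n_0
% with g(n) = n and a constant k > 0:
k n \le k \cdot n \;\Rightarrow\; k n \in O(n), \qquad n + k \le 2n \ \text{for } n \ge k \;\Rightarrow\; n + k \in O(n)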
This happens when GitLab's default `GIT_CLEAN_FLAGS` includes `-ffdx`. Override this:
variables:
UV_CACHE_DIR: .uv-cache
GIT_STRATEGY: fetch
GIT_CLEAN_FLAGS: none
This will preserve untracked files like `.uv-cache/` between pipeline runs.
I also encountered this issue. I tried all of the methods provided here, but nothing worked. The packages were being installed under Python 3.13, but my VS Code interpreter was set to Python 3.12. Once I changed my interpreter to Python 3.13, everything worked.
To change the interpreter, press Ctrl+Shift+P in VS Code, type Python: Select Interpreter, and select the Python 3.13 interpreter.
Wild guess ... If you are using SQLite or any JDBC provider that embeds its own DLLs/.so files in the jar and extracts them to $TMP, that might be an issue if $TMP is mounted noexec.
When the .so/.dll can't load, it may manifest as a ClassNotFound: since the class didn't initialize, it can cause a chain reaction of other classes failing to load.
https://medium.com/@paul.pietzko/trust-self-signed-certificates-5a79d409da9b
this is the best solution I could find for this issue
After an exhaustive debugging process, I have found the solution. The problem was not with the Julia installation, the network, the antivirus, or the package registry itself, but with a corrupted Manifest.toml file inside my project folder.
The error ERROR: expected package ... to be registered was a symptom, not the cause. Here is the sequence of events that led to the unsolvable loop:
My very first attempt to run Pkg.instantiate() failed. This might have been due to a temporary network issue or the initial registry clone failing.
This initial failure left behind a half-written, corrupted Manifest.toml file. This file is the project's detailed "lock file" of all package versions.
Crucially, this corrupted manifest contained a "memory" of the package it first failed on (in my case, Arrow.jl).
From that point on, every subsequent Pkg command (instantiate, up, add CSV, etc.) would first read this broken Manifest.toml. It would see the "stuck" entry for Arrow and immediately try to resolve it before doing anything else, causing it to fail with the exact same error every single time.
This explains the "impossible" behavior where typing add CSV would result in an error about Arrow. The package manager was always being forced to re-live the original failure because of the corrupted manifest.
Wasted days on this issue too.
See why the problem exists here:
https://github.com/nxp-imx/meta-imx/blob/styhead-6.12.3-1.0.0/meta-imx-bsp/recipes-kernel/linux/linux-imx_6.12.bb#L56-L58
Resolution here:
https://community.nxp.com/t5/i-MX-Processors/porting-guide-errors/m-p/1578030/highlight/true#M199614
I am new to JupyterLab and I have a similar issue running on Windows (please see below).
Can someone explain what it means and how to fix it?
C:\Users\paulb>jupyter lab
Fail to get yarn configuration. C:\Users\paulb\AppData\Local\Programs\Python\Python313\Lib\site-packages\jupyterlab\staging\yarn.js:4
(()=>{var Qge=Object.create;var AS=Object.defineProperty;var bge=Object.getOwnPropertyDescriptor;var Sge=Object.getOwnPropertyNames;var vge=Object.getPrototypeOf,xge=Object.prototype.hasOwnProperty;var J=(r=>typeof require<"u"?require:typeof Proxy<"u"?new Proxy(r,{get:(e,t)=>(typeof require<"u"?require:e)[t]}):r)(function(r){if(typeof require<"u")return require.apply(this,arguments);throw new Error('Dynamic require of "'+r+'" is not supported')});var Pge=(r,e)=>()=>(r&&(e=r(r=0)),e);var w=(r,e)=>()=>(e||r((e={exports:{}}).exports,e),e.exports),ut=(r,e)=>{for(var t in e)AS(r,t,{get:e[t],enumerable:!0})},Dge=(r,e,t,i)=>{if(e&&typeof e=="object"||typeof e=="function")for(let n of Sge(e))!xge.call(r,n)&&n!==t&&AS(r,n,{get:()=>e[n],enumerable:!(i=bge(e,n))||i.enumerable});return r};var Pe=(r,e,t)=>(t=r!=null?Qge(vge(r)):{},Dge(e||!r||!r.__esModule?AS(t,"default",{value:r,enumerable:!0}):t,r));var QK=w((GXe,BK)=>
SyntaxError: Unexpected token {
at createScript (vm.js:56:10)
at Object.runInThisContext (vm.js:97:10)
at Module._compile (module.js:549:28)
at Object.Module._extensions..js (module.js:586:10)
at Module.load (module.js:494:32)
at tryModuleLoad (module.js:453:12)
at Function.Module._load (module.js:445:3)
at Module.runMain (module.js:611:10)
at run (bootstrap_node.js:387:7)
at startup (bootstrap_node.js:153:9)
[W 2025-06-15 13:54:15.948 LabApp] Could not determine jupyterlab build status without nodejs
If you are using GitHub Actions for deployment, then you should check out this:
https://github.com/marketplace/actions/git-restore-mtime
This action step restores the timestamps nicely, and after that the S3 sync will only upload the files that actually changed, not the entire directory.
I was reading the 2024 spec version, which had a known inconsistency since 2022. It has since been changed again to say that ToPropertyKey is delayed in the a[b] = c construction; in reality, it is delayed in various other update expressions too. This "specification" is such a joke.
For anyone who finds this and is still looking for help: based on the above and following this documentation from Cypress, which I had to customize a bit for Task:
https://docs.cypress.io/app/tooling/typescript-support#Types-for-Custom-Commands
I ended up with a cypress.d.ts file in my root containing the following, to dynamically set response types for the specific custom task name rather than overriding all of "task".
declare global {
  namespace Cypress {
    interface Chainable {
      task<E extends string>(
        event: E,
        ...args: any[]
      ): Chainable<
        E extends 'customTaskName'
          ? CustomResponse
          : // add more event types here as needed
            unknown
      >
    }
  }
}
There is probably a cleaner approach if you have a large number of custom tasks, maybe a value map or something of the like. For now I am moving on, because way too much time has been wasted on this.
I had the same issue with windows (11) security blocking python from writing files inside my OneDrive Documents folder. Had to override the setting.
Alternatively in modern Excel, you can keep the VBA function as is and rely on Excel function MAP:
=SUM(1 * (MAP(AC3:AD3; LAMBDA(MyCell; GetFillColor(MyCell))) = 15))
When the key is null the default partitioner will be used. This means that, as you noted, the message will be sent to one of the available partitions at random. A round-robin algorithm will be used in order to balance the messages among the partitions.
After Kafka 2.4, the round-robin algorithm in the default partitioner is sticky - this means it will fill a batch of messages for a single partition before going onto the next one.
Of course, you can specify a valid partition when producing the message and it will be respected.
Ordering will not differ: messages get appended to the log in the order of their arrival, regardless of whether they have a key or not.
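For illustration, a minimal kafkajs sketch (the client, broker address and topic are placeholders): the first send omits the key and lets the default partitioner choose the partition, the second pins messages to a partition via the key, and the third bypasses the partitioner by naming a partition explicitly.

import { Kafka } from "kafkajs";

const kafka = new Kafka({ brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function demo() {
  await producer.connect();

  // No key: the default partitioner picks the partition for us.
  await producer.send({ topic: "events", messages: [{ value: "no key" }] });

  // Same key: hashed to the same partition, so per-key ordering is preserved.
  await producer.send({ topic: "events", messages: [{ key: "user-42", value: "keyed" }] });

  // Explicit partition: the partitioner is bypassed entirely.
  await producer.send({ topic: "events", messages: [{ value: "pinned", partition: 0 }] });

  await producer.disconnect();
}

demo().catch(console.error);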
Thank you for the help. I want to change the comment border in the workspace, because when choosing the recommended settings from Dart, I feel the comment border takes up too much space.
Temporarily return 'Text.From(daysDiffSPLY)' instead of the first null and you'll understand: You are comparing daysDiffSPLY and not daysDiffTY so you should compare it to [0, 6], [7, 34], ... instead of [365, 371], [372, 399], etc.
Take a look at this repository: https://github.com/VinamraVij/react-native-audio-wave-recording. I've implemented a method for recording audio with a waveform display and added animations that sync with the waveform and audio during recording. You may need to adjust the pitch values to improve the waveform visualization, as the pitch settings vary between Android and iOS.
Checkout this video demo https://www.youtube.com/watch?v=P3E_8gZ27MU
A suggestion: reinstall SELinux; for me it is always PERMISSIVE by default.
If you reboot and after booting the config says it's DISABLED, that means the system itself is stopping this mode from being applied. Try: chown $USER:$USER /etc/selinux/config, and if that does not help try: chmod +x /etc/selinux/config.
FormatNumber does the opposite, i.e. number to string.
The best option would be FINDDECIMAL, which converts the first occurring numeric in the string field to a number.
If you’ve been exploring the world of crypto lately, you’ve probably seen people talk about NXRA crypto. But what exactly is it—and why does it matter?
In this friendly, step-by-step guide, we’ll explore what NXRA is, how it works, where it fits in the future of finance, and why people are investing in it right now. Whether you’re a total beginner or a seasoned crypto fan, this article will help you understand the real value behind NXRA crypto—in simple terms.
here is a simple option, just add these two lines to your CSS
details > div {
border-radius: 0 0 10px 10px;
box-shadow: 3px 3px 4px gray;
}
see a working example on my test site
You're trying to use a config API that does not exist; I couldn't find documentation for that section.
The solution for your case: write your own custom plugin and modify the Gradle settings as a string there. It is described here: https://github.com/expo/eas-cli/issues/2743
Modifying privacy settings in section "Global" applies to the current Windows session (and thus requires that every guy using this file applies the same setting). I would suggest to keep "Combine data according to each file's Privacy level settings" here.
If data handled by this one file are purely internal, then you can go to privacy settings in section "Current workbook" and select "Ignore the Privacy levels...". This will apply to all its users provided that they kept the "global" setting mentioned here above.
This is safer as you might have some other files using the web connector (now or in the future).
Now if your "PartNumber" comes from an Excel range, you could right click on its query and create a function "GetPartNumber" (without any input parameter). Then use "GetPartNumber()" instead of PartNumber in your query step "Query"; the firewall should not be triggered.
Just got the same error on Visual Studio 2022 using PowerShell Terminal. Fixed by switching the terminal from "Developer PowerShell" to "Developer Command Prompt".
I just found this. Thank you for your explanation.
<?php
$host_name = 'db5005797255.hosting-data.io';
$database = 'dbs4868780';
$user_name = 'xxxxxxxxx';
$password = 'xxxxxxxxxxxxxxxxxxxx';

$link = new mysqli($host_name, $user_name, $password, $database);

if ($link->connect_error) {
    die('<p>Failed to connect to MySQL: ' . $link->connect_error . '</p>');
} else {
    echo '<p>Connection to MySQL server successfully established.</p>';
}
?>
I just finished some programming related to C++ and SFML today. Maybe you can try CMakeLists and some configuration files 😝
In my case it was because of goAsync(). If you read resultCode before the goAsync() call, it contains RESULT_OK; but if you read it after the goAsync() call, it contains 0.
There was nothing wrong with the perspective projection matrix; there was a small issue in the clipping algorithm. z-near should be zero because I was using Vulkan's canonical view volume.
Another issue was that P2.x > P2.w && P2.x < -P2.w wasn't impossible, because the viewing frustum is inverted when z < 0. So I just needed to clip against the near plane first and then against the other planes.
num = 1234
digits = [int(x) for x in str(num)]
This converts num to str, iterates over the characters, converts each one back to int, and adds it to a list.
In my case, it was because I failed to set the correct value in the .plist file for each flavor or environment. I accidentally set the value in the project's Info.plist instead of OneSignalNotificationServiceExtension/Info.plist.
We're having the same issue on Xcode 26 beta 1.
It is hidden on Xcode 16.4 but not on 26 beta 1. I did not see any changes to the API in the new update, so I think this is a bug related to the OS or Xcode.
I'm submitting a bug report to Apple about it.
I would go for outlining the polygon with line-tos first, then filling it with pixels, then checking if my point is inside. I know that it may be slower than the algorithm that is supposed to be used, but I prefer being comfortable with my code when dealing with such problems. To be honest, that algorithm is not the kind of idea that comes quickly and then gets implemented cleanly.
The line-to is here and the fill function is here. Fill will not work with a concave polygon; it may need some updating.
I am also experiencing the same issue, and looking for someone to resolve my issue.
Vertically centered and horizontally centered
.parent div {
display: flex;
height: 300px;
width: 100px;
background-color: gainsboro;
align-items: center;
justify-content: center;
}
To vertically center the text inside the divs, you need to give display: flex and align-items: center to .parent div; this will make their text vertically centered. You can also give justify-content: center to horizontally center them.
You can check whether e.HasMorePages is true, get all the pages from an array, and print them. Something like this:
if (e.HasMorePages)
{
    for (int i = 0; i < PagesArray.Length; i++)
    {
        YourPrintMethod(PagesArray[i]);
    }
}
Hope my tip helps you.
Possible Causes
1. File Path or Name Issue: Ensure the file path and name match the item registry name (`chemistrycraft:items\bottle_of_air.json`).
2. JSON Syntax Error: Verify the JSON syntax is correct (yours appears to be).
3. Missing Model Key: Although your file structure looks standard for item models, some model types might require a "model" key. Consider checking Minecraft Forge documentation or examples.
I installed Node.js 16 and the other required packages, but I don't know what to do with the package manager.
With StringContent we have to read it:
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public async Task ReadStringContentAsync(StringContent theStringContent)
{
    string contentString = await theStringContent.ReadAsStringAsync();
    // do something with the string
}
Same problem after a long, long time; I have exactly the same opinion as you. I could not deal with child_process and all of the other packages; it is so frustrating. Now I want to use C++/Python to print labels for products. But there is another way: if you are using Electron, you can print the window. Use MVC to create a pop-up window and call window.print() to print it, going through the USB001 port, not a file printer.
I had this "table doesn't exist" error. It went away when I reran after quitting SQLiteStudio. I suspect the table can't be created while the .db file is open.
What you describe is called a JSON schema.
For example the JSON schema for the following JSON:
{
    "first" : "Fred",
    "last" : "Flintstone"
}
Would be something like this:
{
    "type": "object",
    "properties": {
        "first": { "type": "string" },
        "last": { "type": "string" }
    }
}
You can then use the jsonschema package for validation:
from jsonschema import validate
validate(
instance=json_to_validate, schema=json_schema,
)
<div class="youtube-subscribe">
<div class="g-ytsubscribe"
data-channelid="UCRzqMVswFRPwYUJb33-K88A"
data-layout="full"
data-count="default"
data-theme="default">
</div>
</div>
<script src="https://apis.google.com/js/platform.js"></script>
Use awsCredentials inside your inputs, with your service connection name, to access the credentials.
I was able to solve the issue by adding an account in the Xcode settings under "accounts".
In the signing and capabilities menu, it looked like I was under my personal developer account (which looked correct) instead of my work account. It said My Name (Personal Team). Then when I added my personal developer account in the settings, it showed up as another item in the team dropdown but without "Personal Team".
It then worked because it was finally pulling the certs using the correct team id.
It can be caused by your active VPN session. Just disconnect your VPN and try again.
It's because you created a specialized function for one object only.
To improve your code, you can create a constructor and then reuse the constructor's code to apply it to any specific object.
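A minimal sketch of that idea (the names here are hypothetical, since the original object isn't shown): define the behavior once in a constructor/class, then reuse it for every object you create.

// Instead of attaching a function to a single object, define it once in a class.
class Widget {
  constructor(public name: string) {}

  applyStyle(color: string): void {
    console.log(`${this.name} styled with ${color}`);
  }
}

// The same constructor code now serves any number of specific objects.
const button = new Widget("button");
const banner = new Widget("banner");
button.applyStyle("red");
banner.applyStyle("blue");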
To understand this better, it is advisable to look at the dependency tree of the POM. It will show the transitive dependencies pulled in by the declared dependencies, and with that you can understand why a given conflict arises. For example, I had jackson-core (2.19.0) and then added jackson-binding (2.19.0). It started reporting a conflict between jackson-binding 2.19.0 and 2.18.3, but I had no jackson-binding 2.18.3 anywhere. When I looked at the dependency tree, I saw that jackson-binding 2.19.0 was pulling in the transitive dependency jackson-core 2.18.3; hence the conflict. Hope this helps to understand. P.S. Transitive dependencies can be excluded, or you can tell your IDE which version should be effective.
I have the exact same problem; in my case npx tsx script works, but the IDE TypeScript service throws the above error. I gave up trying to solve this; I don't think it's worth the time. Instead, a simple built-in alternative in JS is:
let arr = [1, 2, 3];
Math.max(...arr);
Have you found any solution for it? I'm getting the same error and have verified everything.
You need to keep moving the player with velocity, but also call MovePosition on top of it while the player is on the platform. MovePosition should only receive the delta of the platform, while the user-input movement still goes into velocity.
For Xcode 16.4 use this AppleScript:
tell application "Xcode"
activate
set targetProject to active workspace document
build targetProject
run targetProject
end tell
Thank you for your answer, and I appreciate it.