In Lucene, scores can differ between partitions even if the document fields match exactly. The reason is that Lucene’s scoring depends not only on term frequency (TF) and inverse document frequency (IDF) within each partition but also on how big and diverse the index is.
For example:
IDF differences → If “John” or “Smith” appears more frequently in Partition 0 than in Partition 1, Lucene will assign a slightly different IDF value, which changes the score.
Normalization → Field length normalization and document norms may differ across indexes, which affects scoring.
Independent statistics → Each partition is scored in isolation, so two identical matches won’t necessarily get the same numeric score.
If you want consistent scores across partitions, you’d need to:
Use a MultiSearcher / IndexReader that merges statistics across indexes.
Or normalize the scores manually after retrieval if you're always querying partitions separately (see the sketch below).
The important part is: scores are relative within a single index, not absolute across different ones. As long as the top matches are identical, the small score difference doesn’t usually matter.
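If you go the manual-normalization route mentioned above, here is a minimal sketch of the idea (the hit shape and the scaling choice are placeholders; this only makes scores roughly comparable, it does not recreate merged index statistics):

```typescript
interface Hit {
  id: string;
  score: number;
}

// Merge hits from separately queried partitions and rescale each partition's
// scores relative to its own top score, so the lists can be interleaved afterwards.
function mergeAndNormalize(partitions: Hit[][]): Hit[] {
  const merged: Hit[] = [];
  for (const hits of partitions) {
    const top = Math.max(...hits.map((h) => h.score), Number.EPSILON);
    for (const h of hits) {
      merged.push({ ...h, score: h.score / top });
    }
  }
  return merged.sort((a, b) => b.score - a.score);
}
```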
By the way, I recently wrote about how Lucene scoring concepts feel similar to how different worlds are weighted in Anime Witcher — worth checking if you like technical + fantasy blends.
The neumorphic Flutter plugin flutter_neumorphic is discontinued; you can use gusto_neumorphic or flutter_neumorphic_plus instead.
The issue is with bar: dict, as it is too generic a type.
Here is what worked for bar:
from typing import Literal, TypedDict

bar: dict[Literal["bar"], str] = {"bar": "bar"}

# or

class BarDict(TypedDict):
    bar: str

bar: BarDict = {"bar": "bar"}
This means it's something of an "all in" situation if you want to use this way of typing.
Cool new things are coming in this area; for example, have a look at extra items: https://peps.python.org/pep-0728/#the-extra-items-class-parameter
It's late, but yes. You can use i18nGuard, an i18n-aware linter for JS/TS that flags hard-coded strings and also checks missing/unused keys across i18next, React-Intl (FormatJS), and Lingui. I wrote a short post with setup and examples here: Stop shipping hard‑coded strings: Meet i18nGuard — an i18n linter for JS/TS (i18next, React‑Intl, Lingui) (https://dev.to/rmi_b83569184f2a7c0522ad/stop-shipping-hard-coded-strings-meet-i18nguard-an-i18n-linter-for-jsts-i18next-react-intl-4m8a).
I prefer to use many-to-one to prevent recursive and relational fetching issues. You don't need to manage the fetch type... It's less effort, fewer errors... and less frustration...
@Entity
@Data
public class Content {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long contentId;
    private String contentName;
    @Column(columnDefinition = "TEXT")
    private String synopsis;
    @ManyToOne
    @JoinColumn(name = "content_type_id")
    private ContentType contentType;
    @ManyToOne
    @JoinColumn(name = "country_code")
    private Country countryCode;
    private String portraitUrl;
    private String landscapeUrl;
    private Boolean featured;
    @Enumerated(EnumType.STRING)
    private ContentApprovalStatus approvalStatus;
    @CreationTimestamp
    private LocalDateTime createTime;
    @UpdateTimestamp
    private LocalDateTime updateTime;
}
@Entity
@Data
public class ContentGenre {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @ManyToOne
    @JoinColumn(name = "content_id")
    private Content content;
    @ManyToOne
    @JoinColumn(name = "genre_id")
    private Genre genre;
}
@Entity
@Data
public class ContentLanguage {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @ManyToOne
    @JoinColumn(name = "content_id")
    private Content content;
    @ManyToOne
    @JoinColumn(name = "language_id")
    private Language language;
}
You don't need the @OneToMany contentCrews inside the Content entity; you already have Content and Crew on ContentCrew, so delete it:
@OneToMany(mappedBy = "content", cascade = CascadeType.ALL, orphanRemoval = true)
private Set<ContentCrew> contentCrews;
You don't need the @OneToMany contentCrews inside the Crew entity either; the ContentCrew table handles it:
@OneToMany(mappedBy = "crew", cascade = CascadeType.ALL, orphanRemoval = true)
private Set<ContentCrew> contentCrews;
So you just need ContentCrew to manage content and crews. If you want to get content by crew or crew by content, just use the ContentCrew table and drop FetchType.LAZY.
@Entity
@Data
public class ContentCrew {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @ManyToOne
    @JoinColumn(name = "content_id")
    private Content content;
    @ManyToOne
    @JoinColumn(name = "crew_id")
    private Crew crew;
    private String role;
}
This is the easiest entity setup for you to manage...
The warnings you are observing in the logs are related to the JFConnect microservice in Artifactory. JFConnect is trying to communicate with https://jes.jfrog.io/api/v1/register, and the connection is being refused. Since you are using a proxy, you may specify the proxy configuration as below:
For Helm:

jfconnect:
  # Configure only if you use an Artifactory version before 7.77.
  extraEnvironmentVariables:
    - name: http_proxy
      value: http://<proxy URL>/
    - name: https_proxy
      value: http://<proxy URL>/
    - name: no_proxy
      value: localhost,127.0.0.1
For other installation types, add the following configuration in the system.yaml file:

jfconnect:
  # Configure only if you use an Artifactory version before 7.77.
  env:
    http_proxy: "http://<proxy URL>"
    https_proxy: "http://<proxy URL>"
    no_proxy: "localhost,127.0.0.1"
Check this article for more information: https://jfrog.com/help/r/jfrog-installation-setup-documentation/jfconnect-microservice
Found what was wrong:
In the initial request for obtaining a token, the header was returned in lower case (x-subject-token), but when deleting the token it must be word-capitalized: X-Subject-Token.
Try running wp media regenerate or using a plugin that does the same.
How about Google Ceres? I use it to replace SciPy's "l_bfgs_b" and it works fine.
But my problem is pretty simple, so I'm not sure about more complicated problems.
@Frank thanks, you are right. When I rewrite the body of the UDF, the whole example looks like:
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{udf, struct}

val reduceItems = (items: Row) => {
  // getting array of struct from struct
  val a = items.getAs[Seq[Row]]("second")
  // summing struct item in array
  a.map(_.getAs[Int]("navs")).reduce(_ + _)
}
val reduceItemsUdf = udf(reduceItems)
// passing struct of array of struct
h.select(reduceItemsUdf(struct("*")).as("r")).show()
and it works in Spark 4, but I still do not know where the problem was, or why the parameter can't be a Seq.
In my case, Android Studio (Narwhal 3 Feature Drop | 2025.1.3) showed no warnings.
However, an older revision of the API Level 36 emulator did.
After updating to revision 7, the 16 kB alignment warning disappeared.
I finally found the way using chilkat2, thanks to some now deleted comment that pointed me to this example code.
Apparently, Chilkat can use whatever .dll you choose to manage card readers and perform operations with them, such as listing certificates and even using them. It also has its own PDF signing module. Really powerful.
Anyway, this is the code I ended up using:
# Standard libraries
import sys
# Third-party
import tkinter as tk
import customtkinter as ctk
import chilkat2

'''
PDF Digital signature process
'''

# Dialog asking for PIN
root = tk.Tk()
root.withdraw()
dialog = ctk.CTkInputDialog(title="pin", text="Introduce el PIN de tu tarjeta:")
pin = dialog.get_input()
root.destroy()

# Initialize chilkat2 pkcs11 from Nexus Personal's dll
pkcs11 = chilkat2.Pkcs11()
pkcs11.SharedLibPath = r"C:\Program Files (x86)\Personal\bin64\personal64.dll"
success = pkcs11.Initialize()
if not success:
    print(pkcs11.LastErrorText)
    sys.exit()

userType = 1  # Normal User
slotId = 0    # This is arbitrary and pin-pointed by me
readWrite = True
success = pkcs11.OpenSession(slotId, readWrite)
if not success:
    print(pkcs11.LastErrorText)
    sys.exit()

# Login
success = pkcs11.Login(userType, pin)
if not success:
    print(pkcs11.LastErrorText)
    pkcs11.CloseSession()
    sys.exit()

# Get the certificate (on the smart card) that has a private key.
cert = chilkat2.Cert()
success = pkcs11.FindCert("privateKey", "", cert)
if (success == True):
    print("Cert with private key: " + cert.SubjectCN)
else:
    print("No certificates having a private key were found.")
    success = pkcs11.CloseSession()
    sys.exit()

pdf = chilkat2.Pdf()
# Load the PDF to be signed.
success = pdf.LoadFile(r"template.pdf")
if (success == False):
    print(pdf.LastErrorText)
    success = pkcs11.CloseSession()
    sys.exit()

json = chilkat2.JsonObject()
json.UpdateInt("page", 1)
json.UpdateString("appearance.y", "bottom")
json.UpdateString("appearance.x", "right")
json.UpdateString("appearance.fontScale", "10.0")
json.UpdateString("signingAlgorithm", "pss")
json.UpdateString("hashAlgorithm", "sha256")
i = 0
json.I = i
json.UpdateString("appearance.text[i]", f"Firmado digitalmente por: {cert.SubjectCN}")
i = i + 1
json.I = i
json.UpdateString("appearance.text[i]", "current_dt")

# The certificate is internally linked to the Pkcs11 object, which is currently in an authenticated session.
success = pdf.SetSigningCert(cert)
success = pdf.SignPdf(json, r"template_signed.pdf")
if (success == False):
    print(pdf.LastErrorText)
    success = pkcs11.CloseSession()
    sys.exit()

# Revert to an unauthenticated session by calling Logout.
success = pkcs11.Logout()
if (success == False):
    print(pkcs11.LastErrorText)
    success = pkcs11.CloseSession()
    sys.exit()

# When finished, close the session.
success = pkcs11.CloseSession()
if (success == False):
    print(pkcs11.LastErrorText)
    sys.exit()

print("Success signing.")
Eric Evans himself says that DDD is not reasonable for simple systems such as user management, where creating a ubiquitous language just wastes time on an obvious model.
Evans also thinks that DDD is not effective for technically complex projects, because a lot of technical personnel would have to understand and learn the ubiquitous language and absorb the domain model. Personally, I completely disagree with this, because domain/infrastructure separation solves the problem: only the teams that specialize in domain logic development must work with the ubiquitous language. Infrastructure developers (Kafka events, request routing, database interactions) do not have to be fluent in DDD.
Developing the ubiquitous language takes time, and so does the corresponding learning curve; in the same time an MVP could be built.
However, a ubiquitous language helps reduce the time needed to describe and visualize the domain in a clear and concise form, far more readable than long-winded prose.
I did add extra info on why this d string is actually useful. Please un-hide my post. Thank you.
No — you don’t need php5-mysql.
Since you’re on PHP 7, just install the matching package:
sudo apt-get install php7.0-mysql
sudo systemctl restart php7.0-fpm nginx
That will fix the error.
Got it.
owb = oxl.Workbooks.OpenXML(file,, 2)
did the trick.
https://learn.microsoft.com/en-us/office/vba/api/excel.xlxmlloadoption
You can use this to mark completion; it can be done for course or topic completion.
I added the following line of code to solve my problem
implementation 'com.google.android.material:material:1.7.0'
All that was required was the itext.pdfcalligraph package along with a valid license. Once loaded as shown below, Arabic text started displaying correctly. I’m surprised it isn’t mentioned anywhere that having the license is absolutely necessary in addition to the itext.pdfcalligraph package.
LicenseKey.LoadLicenseFile(licenseFile);
I created an NPM package google-maps-vector-engine to handle PBF/vector tiles on Google Maps, offering near-native performance and multiple functionalities. I recommend giving it a try.
4000000000009995 is the one; you can use it to test for an insufficient funds decline.
PUSH only works with addresses, not register names; so use temp EQU 00h instead of temp EQU R0.
Here is the documentation of the StackOverflow API:
Here you will find the newest API version and all the available endpoints.
You can fetch any user's data with a request like this: https://api.stackexchange.com/2.3/users/{user-id}?site=stackoverflow
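For example, a quick TypeScript (Node 18+) sketch of calling that endpoint; the user id and the printed fields are just illustrative:

```typescript
// Fetch a user's profile from the Stack Exchange API and print a couple of fields.
async function getUser(userId: number): Promise<void> {
  const res = await fetch(`https://api.stackexchange.com/2.3/users/${userId}?site=stackoverflow`);
  const body = await res.json();
  const user = body.items?.[0]; // the API wraps results in an "items" array
  console.log(user?.display_name, user?.reputation);
}

getUser(1);
```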
Btw, I need some \0 in the middle too.
As a C string ends with zero, I propose...
a d string ending with delete (\x7f).
:-)
Everything you set up looks correct; the only issue is that after adding the new Image Symbol Set, you need to rename the image set to "custom.viewfinder". Once that's done, you'll be able to use your custom symbol as expected.
Attaching assets
Sample code
It seems to be a bug. Please vote for IDEA-186221.
If you're just experimenting, you can simply use lifecycle rules to periodically clean up (for example, retaining files for 90 days).
For a production environment, we recommend:
Write a script to identify "active" S3 objects (see the sketch after this list).
Only clean up historical ZIP files and templates that are no longer referenced.
Alternatively, use a custom S3 bucket and overwrite the same file name on each build to avoid accumulation.
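For the scripted cleanup mentioned in the list above, here is a rough sketch using the AWS SDK for JavaScript v3; the bucket name, the set of "active" keys, and the .zip filter are placeholders for however you track references:

```typescript
import { S3Client, ListObjectsV2Command, DeleteObjectsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const Bucket = "my-deployment-artifacts"; // placeholder bucket name

// Delete unreferenced ZIPs, page by page.
async function cleanUp(activeKeys: Set<string>): Promise<void> {
  let ContinuationToken: string | undefined;
  do {
    const page = await s3.send(new ListObjectsV2Command({ Bucket, ContinuationToken }));
    const stale = (page.Contents ?? []).filter(
      (o) => o.Key !== undefined && o.Key.endsWith(".zip") && !activeKeys.has(o.Key)
    );
    if (stale.length > 0) {
      await s3.send(
        new DeleteObjectsCommand({
          Bucket,
          Delete: { Objects: stale.map((o) => ({ Key: o.Key! })) },
        })
      );
    }
    ContinuationToken = page.NextContinuationToken;
  } while (ContinuationToken);
}
```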
@djv
XML opened in Excel from my code:
XML opened in Excel as XML table:
Thanks Achim for your quick and detailed reply — and apologies for including an image of the code. I tried to use the “four backticks” trick (I even attempted something similar at some point) but couldn’t get it to work.
Both of your proposed solutions work fine; the second one, based on "mytemplate.md", is clearly preferable.
There's only one remaining issue: if I include a graph in the R code, the following is added to the LaTeX file.
\pandocbounded{\includegraphics[keepaspectratio]{media/supplements1/exercise1/ex1-unnamed-chunk-3-1.png}}
However, pdflatex compilation fails because it cannot find the PNG file (I wasn't able to locate where it gets saved; maybe a temporary folder?). Do you have any idea how to fix this?
By the way, my ultimate goal is to produce a PDF file with all the exercises so I can quickly scan them when preparing exams. With the code I wrote and your suggestions, I’ve almost reached that goal.
Thanks a lot for the wonderful package!
Had the same issue. Inside AppShell.xaml, in the <Shell> section, you can add or change Title="Your Title" to change the title that's displayed when the app runs; the other solutions didn't update that part for me.
flutter build apk
./gradlew assembleRelease
Forced plugin code generation: the flutter build apk command forces the Flutter toolchain to generate all the necessary plugin code and dependencies.
Complete build process: Flutter's build process ensures that the native parts of all plugins are correctly compiled and linked.
Dependency resolution: running the Flutter build first resolves dependency-ordering issues in hybrid development.
Add the relaxed constexpr flag:
CUDA suggests using the --expt-relaxed-constexpr flag. You can add this in your CMakeLists.txt before you build OpenCV. For example:
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} --expt-relaxed-constexpr")
After that, re-run CMake so it regenerates the Visual Studio solution.
Check CUDA/cuDNN versions:
You're on CUDA 12.9 and cuDNN 8.9.7, but it's worth checking NVIDIA's compatibility matrix for your GPU. Sometimes, even if versions look compatible, there can still be hiccups. If nothing works, try rolling back to a slightly older CUDA release (say, 12.3) along with a matching cuDNN version; this often solves hidden issues.
Consider OpenCV version:
Make sure you are building from the latest OpenCV and opencv_contrib sources; newer releases tend to have better CUDA support, especially for the latest GPUs, and upgrading both in sync might prevent these conflicts altogether.
Verify CUDA installation:
Make sure your CUDA Toolkit is fully installed and your environment variables are set properly (CUDA_PATH, CUDA_PATH_Vxx_x, Path on Windows, etc.). Leftovers from old CUDA installs can sometimes trip up the build.
Do a clean rebuild:
Whenever you change build flags or environment settings:
Delete your existing build folder.
Re-run CMake from scratch.
Open Visual Studio, clean the solution, and then rebuild.
Try lowering optimization:
In Visual Studio’s project settings, reduce the compiler optimization level (under C/C++ → Optimization). Sometimes aggressive optimizations cause CUDA code to fail in odd ways.
Keep an eye on Blackwell issues:
Since you’re working with the new RTX 50-series (Blackwell), there could be some growing-pains with CUDA or OpenCV that aren’t widely documented yet. Checking the NVIDIA developer forums or the OpenCV GitHub issues page might reveal others facing the same problem.
Yes, Go code can be run from your TypeScript code. Write your Go code, then use Cobra to expose it as custom CLI commands; then you just need to execute those commands from within your TypeScript code.
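As a minimal sketch, assuming your Cobra app is built into a binary called mytool with a greet subcommand (both names are placeholders), the TypeScript side can be as simple as:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Runs: ./mytool greet --name <name> and returns its stdout.
async function runGreet(name: string): Promise<string> {
  const { stdout } = await execFileAsync("./mytool", ["greet", "--name", name]);
  return stdout.trim();
}

runGreet("world").then(console.log).catch(console.error);
```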
HUGE thanks to poster ''96'' concerning the
arm64-v8a.apk
question. Super-- I was looking all over for info on this. But this fellow's info really
sets me straight. Most grateful for the info
best wishes to the poster.
fred
missouri, usa
XSS-safe regex with folder and query support:
^ipfs:\/\/(Qm[1-9A-HJ-NP-Za-km-z]{44,}|b[A-Za-z2-7]{58,}|B[A-Z2-7]{58,}|z[1-9A-HJ-NP-Za-km-z]{48,}|F[0-9A-F]{50,})([/?#][-a-zA-Z0-9@:%_+.~#?&//=]*)*$
When running the Table Extraction Wizard you can define whether you want the visible value from the indicated cell or the underlying URL associated with it.
Try choosing Extract URL to get the desired outcome.
You need a PHP handler on the server to receive the POST. I would access the data in PHP through the $_POST array.
$.ajax({
    url: "upload.php",
    type: "POST",
    data: formData
});
Common causes:
1. Duplicate asset declarations
example:
assets/images/background.png
assets/images/background.png
2. Case mismatch on Windows
example:
assets/images/background.png
assets/images/Background.png
3. Stale build cache
Even if your assets are correct, a leftover copy in build/ or .dart_tool/ can trigger this error.
How to Fix?
1. Verify your pubspec.yaml only includes
flutter:
  assets:
    - assets/images/background.png
2. Delete build/ and .dart_tool/ manually before running again.
3. Run below command
flutter clean
flutter pub get
flutter run
4. The app builds successfully, with no more PathExistsException. Hurray 🎉
To fix this, change max-width to width. The issue is that you are using max-width instead of width in your CSS.
The max-width property only sets a limit on how wide the image can be, but it doesn't force it to a specific size. Because you've set a fixed height of 165px, the browser is scaling the image's width to maintain its original aspect ratio, making it wider than 117px.
useNavigation() is a React hook. Hooks only work inside React components that are rendered inside a navigator. But App.tsx is not a screen, it’s your root component, so there’s no navigation context there. That’s why React Navigation throws.
I think you should use a navigationRef for this case (standard practice).
Here is the code.
Create RootNavigation.ts:
import { createNavigationContainerRef } from '@react-navigation/native';

export const navigationRef = createNavigationContainerRef();

export function navigate(name: string, params?: object) {
  if (navigationRef.isReady()) {
    navigationRef.navigate(name as never, params as never);
  }
}
Attach the ref to your NavigationContainer:
import { NavigationContainer } from '@react-navigation/native';
import { navigationRef } from './RootNavigation';

export default function App() {
  return (
    <NavigationContainer ref={navigationRef}>
      {/* your Stack.Navigator / Tab.Navigator goes here */}
    </NavigationContainer>
  );
}
Use navigate in Centrifugo callback:
import { Centrifuge } from 'centrifuge';
import Toast from 'react-native-toast-message';
import { navigate } from './RootNavigation';

useEffect(() => {
  const centrifuge = new Centrifuge("wss://centrifugo.xxxxx.xxxxxx/connection/websocket", {
    token: "xxxxxxxxxxxxxxxxxxxx"
  });

  centrifuge.on('connected', ctx => {
    console.log(`centrifuge connected over ${ctx.transport}`);
  }).connect();

  const sub = centrifuge.newSubscription("xxxxx", {
    token: 'xxxxxxxxxxxxxxxxxxxx'
  });

  sub.on('publication', ctx => {
    Toast.show({
      type: "success",
      text1: ctx.data['message']
    });
    navigate('DetailHistory', {
      id: ctx.data['transaction_id']
    });
  }).subscribe();

  return () => {
    centrifuge.disconnect();
    console.log('Centrifuge client disconnected on cleanup.');
  };
}, []);
Hope this fixes your problem.
THANK YOU !
09-2025
let me start by saying that
IT SHOULD NOT BE THIS DIFFICULT TO SETUP A BUILD ENVIRONMENT FOR ARM64 ON AN ARM64 MACHINE!!!!
sorry... had to get that off my chest. :-)
I just purchased the new Lenovo Chromebook Plus with a MediaTek Kompanio ARM64 CPU and 16 GB of RAM. Why did I buy this? Because every attempt to build an Android app on my 8 GB Intel-based Chromebook crashed halfway through the build process due to lack of memory resources.... sigh
so, it made perfect sense to get an ARM64-based CPU to do my ARM64-based application development... right? wrong!!!
because, for some reason, there is no officially supported ARM64 version of Android Studio... Why? Why? Why?
ok, seriously... done with my rant.... I have it working and I've documented it all..... it only took 4 days, but here it is... (these instructions assume no previous installations have been attempted. I rebuilt from scratch multiple times to get the cleanest set of instructions that I could)
I've tested this installation/configuration on bookworm and trixie
wget -O ~/temp/android-sdk-tools-linux-35.0.2-aarch64.zip \
https://github.com/lzhiyong/android-sdk-tools/releases/download/35.0.2/android-sdk-tools-static-aarch64.zip
wget -q -O - \
https://redirector.gvt1.com/edgedl/android/studio/ide-zips/2025.1.3.7/android-studio-2025.1.3.7-linux.tar.gz \
| tar -C ~/.local/share -xzvf - \
--exclude 'android-studio/jbr/*' \
--exclude 'android-studio/lib/jna/*' \
--exclude 'android-studio/lib/native/*' \
--exclude 'android-studio/lib/pty4j/*'
wget -q -O - \
https://download.jetbrains.com/idea/ideaIC-2025.2.2-aarch64.tar.gz \
| tar -C ~/.local/share/android-studio \
-xzvf - \
--wildcards '*/bin/fsnotifier' \
'*/bin/restarter' \
'*/lib/jna' \
'*/lib/native' \
'*/lib/pty4j' \
--strip-components=1
wget -q -O - \
https://cache-redirector.jetbrains.com/intellij-jbr/jbrsdk_ft-21.0.8-linux-aarch64-b1138.52.tar.gz \
| tar -C ~/.local/share/android-studio/jbr \
-xzvf - \
--strip-components=1
mv ~/.local/share/android-studio/bin/studio ~/.local/share/android-studio/bin/studio.do_not_use
sed -i 's/amd64/\$OS_ARCH/' ~/.local/share/android-studio/bin/*.sh
sed -i 's/amd64/aarch64/' ~/.local/share/android-studio/product-info.json
reboot the linux shell and the laptop:
sudo reboot
launch android studio:
~/.local/share/android-studio/bin/studio.sh
rm -rf ~/Android/Sdk/platform-tools/lib64 && unzip ~/temp/android-sdk-tools-linux-35.0.2-aarch64.zip -x build-tools/* -d ~/Android/Sdk/
unzip ~/temp/android-sdk-tools-linux-35.0.2-aarch64.zip -x platform-tools/* -d ~/Android/Sdk/build-tools/36.1.0
mv ~/Android/Sdk/build-tools/36.1.0/build-tools/* ~/Android/Sdk/build-tools/36.1.0
rmdir ~/Android/Sdk/build-tools/36.1.0/build-tools
file ~/Android/Sdk/build-tools/36.1.0/aapt2
android.aapt2FromMavenOverride=/home/rhkean/Android/Sdk/build-tools/36.1.0/aapt2
Ok, from cppreference
If pos == size(), it returns a reference to a character with value CharT() (the null character); if the object referred to by the returned reference is modified to any value other than CharT(), the behavior is undefined.
So, overall, this code produces undefined behaviour, which is consistent with what you're seeing.
SimpleAggregateFunction stores only the current value of the aggregate function (e.g. max, sum, count), not its full state the way AggregateFunction does (e.g. avg, uniq). AggregateFunction stores the entire intermediate state required to calculate the result.
Use SimpleAggregateFunction if f(R1 UNION ALL R2) = f(f(R1) UNION ALL f(R2)).
eg: sum(A1,A2,A3,B1,B2) = sum(sum(A1,A2,A3) , sum(B1,B2))
Use AggregateFunction otherwise
eg: avg(A1,A2,A3,B1,B2) != avg(avg(A1,A2,A3) , avg(B1,B2))
Note that SimpleAggregateFunction is faster, so prefer it over AggregateFunction whenever the result can be computed without keeping full states.
In my case, I used the shortcut Ctrl + ; and it worked to comment out an entire block of code that I selected.
The error says the API call inside can't take place during the Next.js build, so just put a return above the API call, like below (before the Docker build, update the env for production):
async function getCaseStudies(): Promise<CaseStudy[]> {
if (process.env.NODE_ENV === "production") {
return [];
}
const res = await fetch(`${process.env.NEXT_PUBLIC_API_BASE_URL}/api/v1/case-studies/`, {
next: { revalidate: 600 }, // Revalidate every 600 seconds (10 minutes)
});
if (!res.ok) {
// This will activate the closest `error.js` Error Boundary
throw new Error('Failed to fetch case studies');
}
return res.json();
}
The core reason you're currently unable to connect is most likely not due to a configuration error, but rather to the cloud provider not actually opening the ODBC port (due to security policy restrictions).
The next step is to confirm with them whether they support direct external ODBC connections. If not, you'll need to use the API, export, or proxy.
I just had the same issue! The problem was that the lock files were inside the monorepo folders - they must be only in the root folder, as described in the documentation: https://opennext.js.org/aws/common_issues#cannot-find-module-next
Make sure you have a single lock file (package-lock.json, yarn.lock, or pnpm-lock.yaml) in the root of your project, not in the individual app directories within your monorepo.
Reversing words in text can be a fun way to create puzzles or encrypt messages. You can do this manually by writing the words backward, or use online tools for quick results. For an effortless approach, try the free tool at TadaTools, which allows you to reverse words instantly without any hassle. Just paste your text into the tool, click the reverse button, and see your words flipped in seconds. It’s perfect for educators, students, or anyone needing to manipulate text creatively. Explore this handy resource at How to Reverse Words in Text.
Right after this documentation section:
-XX:MaxMetaspaceSize=size option
-XX:MaxMetaspaceSize=sizeSets the maximum amount of native memory that can be allocated for class metadata. By default, the size isn't limited. The amount of metadata for an application depends on the application itself, other running applications, and the amount of memory available on the system.
The following example shows how to set the maximum class metadata size to 256 MB:
-XX:MaxMetaspaceSize=256m
In contrast, according to this documentation:
-XX:MaxMetaspaceSize=size setting| Setting | Description |
|---|---|
| -XX:MaxPermSize | Maximum size of the permanent generation. |
The maximum perm size should be set to 1024 Megabytes.
In addition, according to this documentation: Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning
Permanent Generation Size
The permanent generation does not have a noticeable impact on garbage collector performance for most applications. However, some applications dynamically generate and load many classes; for example, some implementations of JavaServer Pages (JSP) pages. These applications may need a larger permanent generation to hold the additional classes. If so, the maximum permanent generation size can be increased with the command-line option
-XX:MaxPermSize=<N>.
For a relevant read, see: What does -XX:MaxPermSize do?
The code in Arduino Core works because the BUTTON_PIN has the pull-up enabled with pinMode(BUTTON_PIN, INPUT_PULLUP);. While in the CMSIS code, this is not the case for the PA02 (BUTTON_PIN). So the EIC_handler will never be triggered.
Try to change the line
PORT->Group[0].PINCFG[BUTTON_PIN].bit.PMUXEN = 1;
to
PORT->Group[PORTA].PINCFG[BUTTON_PIN].reg |= (PORT_PINCFG_PMUXEN | PORT_PINCFG_PULLEN);
I didn't check the rest of the code. Let us know if this works. If it does not, please elaborate on what exactly is not working, as commented by @thebusybee.
Try replacing var with let. When using var, all onClick handlers reference the same variable that retains its final value after loop completion.
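A quick illustration of why this happens (plain browser-side TypeScript/JavaScript; the button query is just an example):

```typescript
const buttons = Array.from(document.querySelectorAll<HTMLButtonElement>("button"));

// With var there is a single shared i, so every handler sees its final value.
for (var i = 0; i < buttons.length; i++) {
  buttons[i].onclick = () => console.log(i); // always logs buttons.length
}

// With let each iteration gets its own binding, so each handler sees its own index.
for (let j = 0; j < buttons.length; j++) {
  buttons[j].onclick = () => console.log(j); // logs 0, 1, 2, ...
}
```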
To anyone who is lost on this issue, especially with regards to neovim, I found the simplest solution.
Just add the lines to options.lua or any other config file.
vim.lsp.enable("dartls")
This makes use of the nvim-lspconfig plugin which is preinstalled.
And update your Lazyvim distro and Neovim instance.
I still don't have a comprehensive understanding on LSP, but this achieves the goal of effective navigation across my project and to and within Flutter source code.
the previous kotlin version was 1.8.x to 1.9.21
Found the solution. What I needed to do was add the lookahead parameter to the request.security() call, with barmerge.lookahead_on selected.
Take a look at this
https://bootstrapstudio.io/docs/exporting.html#export-scripts
This should resolve the issue.
Found this post while looking for the same information. Finally just decided to trial-and-error it, the poor server.
wait 2 seconds
send "username<cr>"
wait 1 seconds
send "password<cr>"
The first wait just ensures it waits long enough for the 'welcome to server' username prompt. The second one is enough for the password prompt to appear.
The wait command seems really powerful in that you could possibly even create IF statements and loops based on what appears at a given point on the screen, i.e. "if still showing a progress bar, wait a bit longer before continuing with input", but for this purpose a plain one-second wait will do fine.
I am having the same problem with Azure Notification Hubs. @waro, can you share the routing logic between hubs that works for you?
I am considering routing to one hub on odd days and to the second on even days, but I need routing logic that scales beyond two hubs.
For me, none of these answers helped, and it was probably just due to my own inexperience in Expo/mobile development.
I needed to run npx expo run:android to rebuild the app. Just doing npm i ... and npx expo start is not enough and will result in the missing RNCDatePicker error.
None of the above solved the "No Module" problem for me when I upgraded Gradle from V7.3 to V8.13.
The solution was to add a namespace declaration in the build.gradle file.
See <https://developer.android.com/build/configure-app-module#set-namespace>
Using the correction pointed out by @001,
angle = 360 / n_sides
your code correctly plots a hexagon.
To close the plot, click the Thonny stop icon.
I had this issue today myself, and realized that a solution might be possible using Reflection and then instantiating a new BaseClass object and then copying the values from the properties of the SubClass object onto the BaseClass object (in this so called extension method).
I won't post the code that solved this for me, but if you do need the code for the above design, try googling something like "how do I convert an object from one type to another that has the same properties using reflection".
You cannot prevent that. Gmail only supports what it supports. Anything else gets stripped.
This is a fairly new CSS style so it evidently hasn't been whitelisted by Gmail.
However, you might like to try more standard responsiveness with @media queries, which do work (https://www.caniemail.com/features/css-at-media/).
I don't know if this will solve your issue, but I wanted to share in case it helps others. I was having the same issue: I had a gallery within a container, and the text inputs tabbed as expected, but the dropdowns and date pickers would be skipped.

I had the DisplayMode property of the gallery set to edit mode or view mode based on a variable within the app. I discovered that when I removed that code from the gallery's DisplayMode property and set it to DisplayMode.Edit, the tabbing worked as expected for all fields/input types. My workaround was to use the formula in the DisplayMode of the inputs directly, rather than on the gallery as a whole.

I'm not sure why this worked, but if anyone is facing this issue, check the DisplayMode property of the parent gallery, form, etc. to see if setting it to DisplayMode.Edit resolves your problem.
You should be able to pass in true for assignable, and it will check for inheritance, assuming StringSerializer is a Serializer
I ran into a similar issue in Kotlin, but similar idea should apply:
DelegatingByTypeSerializer(
mapOf(
ByteArray::class.java to ByteArraySerializer(),
KafkaMessage::class.java to KafkaMessageSerializer(),
),
true
)
It was a safari/webkit issue.
The latest 26.1 beta from 22 September has fixed the issue. Now we just need to wait for it to be released or a patch to go before that.
After encountering the same problem myself, I checked the library's source code. You can see on line 157 in lib/actions/end_session.js that you need to add the parameter "logout" to the "session/end/confirm" request to delete the session from storage and cookies.
AssetsLibrary is a fairly old library for accessing the user's photos. It has long been deprecated. You already know that. Apple introduced the Photos framework as a replacement.
Since Xcode 26 requires iOS 15 as the minimum supported deployment target, and the AssetsLibrary framework was deprecated long ago, it’s no longer available or recommended to use in newer versions of iOS and Xcode.
To quote Apple staff member Quinn "The Eskimo", who confirmed this is a bug:
This is obviously a bug and I encourage you to file a report about it. Please post your bug number, just for the record.
If your code does not directly import AssetsLibrary, then check the target settings (General -> Frameworks, Libraries, and Embedded Content). You can also list the Podfile.lock or Package.resolved so that we can do further analysis.
In my own experience, I ran into a performance bottleneck with H2O recently; here are my findings.
Basically, h2o.remove is not just a quick Python memory cleanup. It kicks off a full garbage collection in the H2O backend, which runs on Java. This can be surprisingly slow, especially if you are doing it repeatedly.
A few things that work better for internal cleanup:
Avoid removing frames inside your plotting function if possible.
If memory usage is not a huge concern, just skip h2o.remove entirely.
I recommend that if you really want to free up space, you do it in batches at the end using h2o.remove_all(), and make sure you don't call h2o.remove on every single frame within a loop.
The variables you are using for your v-models don't seem to be defined anywhere; that's why you are getting all these "undefined" results.
In your <script setup> you can define them like this:
import { ref } from 'vue'

const titleValue = ref('')
const categoryValue = ref('')
const dateValue = ref('')
const descriptionValue = ref('')
The default logstash config path is /usr/share/logstash/pipeline
Do you know the exact code I need to enter? Is it possible for you to give me the entire modified code to test? I'm a beginner, sorry.
Thanks for your help.
You can use the library omni_video_player, which has many properties that fit your case.
Specific setup:
Xbox series X connected to the AverMedia GC553 LiveGamer Usb device
Xbox is set to a forced resolution of 1920x1080 at 120hz
Using linux on pc with kernel 6.16.7-200.nobara.fc42.x86_64
The AverMedia GC553 shows up as /dev/video1 /dev/video2 and /dev/media0
This configuration presents the same issue, it shows "No such file or directory" when trying to open the video with ffplay /dev/video1
The GC553 starts in a weird sleep state when first connected; you need to poke it a few times for it to start up.
Figure out what device your Live Gamer is at
#: v4l2-ctl --list-devices | grep "Live Gamer" -A 3
Live Gamer Ultra-Video: Live Ga (usb-0000:10:00.3-2):
/dev/video3
/dev/video4
/dev/media0
Now poke the first device in that list a few times with v4l2-ctl -d /dev/video3 --stream-mmap --stream-count=1 --stream-to=/dev/null
If that returns VIDIOC_STREAMON returned -1 (No such file or directory) then run it again
When it works correctly it will return something else, in my case it returns <
Yes, that command does return the less-than symbol. I do not know why it only returns that. No there is nothing else returned. I understand that this sounds confusing.
Once you get < from that command the device is awake and ready and you can connect to it with ffplay /dev/video3
This works reliably when the device has been recently plugged in or my computer has been just turned on.
I find that ffplay without parameters will open the nv12/yv12 pixel format. I prefer to open the bgr24 one because it has a wider range of colors. To get it to display correctly I use ffplay /dev/video3 -f v4l2 -pixel_format bgr24 -vf vflip, where -vf vflip is needed because otherwise the image displays upside-down.
The GC553 gets corrupted after being connected for a long time; if this happens, reconnect it by physically unplugging it and plugging it back in.
I have not found a reliable way to reset the USB device without disconnecting it. If the v4l2-ctl command is getting stuck, you may need to reconnect the USB device. If I find a way, I will modify this answer. Maybe a kernel module removal and insertion could make the device reset, but I have not tested that. The usbreset command did not work for me; it just hangs.
I also asked in the Netlify support forum, and an engineer provided a workable reply: https://answers.netlify.com/t/magic-login-link-callback-redirect-is-not-working/156298. However, I decided to give up and deploy to Vercel instead after running into another issue with the auth cycle in my app.
So the solution is: deploy to Vercel, which worked perfectly.
Have you looked at the SK_SKB hook? It is called when a message is enqueued on the socket's receive queue, so it has the same behavior as a socket program.
See modified pens/snippets with text scrolling from bottom:
* {
box-sizing: border-box;
}
@-webkit-keyframes ticker {
0% {
-webkit-transform: translate3d(0, 100%, 0);
/* start off screen, at 100% */
transform: translate3d(0, 100%, 0);
/* same as above */
visibility: visible;
}
100% {
-webkit-transform: translate3d(0, -100%, 0);
/* y instead of x, was: translate3d(-100%, 0, 0) */
transform: translate3d(0, -100%, 0);
/* same as above */
}
}
@keyframes ticker {
0% {
-webkit-transform: translate3d(0, 100%, 0);
/* same as above */
transform: translate3d(0, 100%, 0);
/* same as above */
visibility: visible;
}
100% {
-webkit-transform: translate3d(0, -100%, 0);
/* same as above */
transform: translate3d(0, -100%, 0);
/* same as above */
}
}
.ticker-wrap {
position: fixed;
top: 0;
/* new: align top */
left: 0;
/* instead of bottom: 0; */
height: 100%;
/* instead of width: 100%; */
overflow: hidden;
width: 4rem;
/* instead of height: 4rem; */
background-color: rgba(0, 0, 0, 0.9);
box-sizing: content-box;
}
.ticker-wrap .ticker {
display: inline-block;
width: 4rem;
/* instead of height: 4rem; */
line-height: 4rem;
white-space: nowrap;
box-sizing: content-box;
-webkit-animation-iteration-count: infinite;
animation-iteration-count: infinite;
-webkit-animation-timing-function: linear;
animation-timing-function: linear;
-webkit-animation-name: ticker;
animation-name: ticker;
-webkit-animation-duration: 30s;
animation-duration: 30s;
}
.ticker-wrap .ticker .ticker__item {
display: inline-block;
padding: 0;
/* or, if you want a gap between text disappearing and appearing again: */
/* padding: 2rem 0; */
/* instead of 0 2rem; */
font-size: 2rem;
color: white;
/* for text rotation: */
writing-mode: vertical-lr;
/* or vertical-rl, doesn't matter if you have one line */
/* from https://stackoverflow.com/a/50171747/15452072 */
}
body {
padding-left: 5rem;
}
/*h1,
h2,
p {
padding: 0 5%;
}*/
<h1>Pure CSS Ticker (No-JS)</h1>
<h2>A smooth horizontal news like ticker using CSS transform on infinite loop</h2>
<div class="ticker-wrap">
<div class="ticker">
<!-- more than one item do not show anyway, no idea why they were there -->
<div class="ticker__item">Lorem ipsum dolor sit amet, consectetur adipiscing elit.</div>
</div>
</div>
<p>So, annoyingly, most JS solutions don't do horizontal tickers on an infinite loop, nor do they render all that smoothly.</p>
<p>The difficulty with CSS was getting the animation to transform the entire items 100% yet include an offset that was only the width of the browser (and not the items full width).</p>
<p>Setting the start of the animation to anything less than zero (e.g. -100%) is unreliable as it is based on the items width, and may not offset the full width of the browser or creates too large an offset</p>
<p>Padding left on the wrapper allows us the correct initial offset, but you still get a 'jump' as it then loops too soon. (The full text does not travel off-screen)</p>
<p>This is where adding display:inline-block to the item parent, where the natural behaviour of the element exists as inline, gives an opportunity to add padding-right 100% here. The padding is taken from the parent (as its treated as inline) which usefully is the wrapper width.</p>
<p><b>Magically*</b> we now have perfect 100% offset, a true 100% translate (width of items) and enough padding in the element to ensure all items leave the screen before it repeats! (width of browser)</p>
<p>*Why this works: The inside of an inline-block is formatted as a block box, and the element itself is formatted as an atomic inline-level box. <br>Uses `box-sizing: content-box`<br>
Padding is calculated on the width of the containing box.<br>
So as both the ticker and the items are formatted as nested inline, the padding must be calculated by the ticker wrap.</p>
<p>Ticker content c/o <a href="http://hipsum.co/">Hipsum.co</a></p>
or with text scrolling from top
* {
box-sizing: border-box;
}
@-webkit-keyframes ticker {
/* additionaly, here we change the order of keyframes */
0% {
-webkit-transform: translate3d(0, -100%, 0);
/* y instead of x, was: translate3d(-100%, 0, 0) */
transform: translate3d(0, -100%, 0);
/* same as above */
}
100% {
-webkit-transform: translate3d(0, 100%, 0);
/* start off screen, at 100% */
transform: translate3d(0, 100%, 0);
/* same as above */
visibility: visible;
}
}
@keyframes ticker { /* same as above */
0% {
-webkit-transform: translate3d(0, -100%, 0);
/* same as above */
transform: translate3d(0, -100%, 0);
/* same as above */
}
100% {
-webkit-transform: translate3d(0, 100%, 0);
/* same as above */
transform: translate3d(0, 100%, 0);
/* same as above */
visibility: visible;
}
}
.ticker-wrap {
position: fixed;
top: 0;
/* new: align top */
left: 0;
/* instead of bottom: 0; */
height: 100%;
/* instead of width: 100%; */
overflow: hidden;
width: 4rem;
/* instead of height: 4rem; */
background-color: rgba(0, 0, 0, 0.9);
box-sizing: content-box;
}
.ticker-wrap .ticker {
display: inline-block;
width: 4rem;
/* instead of height: 4rem; */
line-height: 4rem;
white-space: nowrap;
box-sizing: content-box;
-webkit-animation-iteration-count: infinite;
animation-iteration-count: infinite;
-webkit-animation-timing-function: linear;
animation-timing-function: linear;
-webkit-animation-name: ticker;
animation-name: ticker;
-webkit-animation-duration: 30s;
animation-duration: 30s;
}
.ticker-wrap .ticker .ticker__item {
display: inline-block;
padding: 0;
/* or, if you want a gap between text disappearing and appearing again: */
/* padding: 2rem 0; */
/* instead of 0 2rem; */
font-size: 2rem;
color: white;
/* for text rotation: */
writing-mode: vertical-lr;
/* or vertical-rl, doesn't matter if you have one line */
/* from https://stackoverflow.com/a/50171747/15452072 */
/* and we want it the other way, from top to bottom, so we need to rotate: */
-webkit-transform: rotate(-180deg);
-moz-transform: rotate(-180deg);
transform: rotate(-180deg);
/* filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3); */
/* do not bother supporting IE, it's dead */
}
body {
padding-left: 5rem;
}
/*h1,
h2,
p {
padding: 0 5%;
}*/
<h1>Pure CSS Ticker (No-JS)</h1>
<h2>A smooth horizontal news like ticker using CSS transform on infinite loop</h2>
<div class="ticker-wrap">
<div class="ticker">
<!-- more than one item do not show anyway, no idea why they were there -->
<div class="ticker__item">Lorem ipsum dolor sit amet, consectetur adipiscing elit.</div>
</div>
</div>
<p>So, annoyingly, most JS solutions don't do horizontal tickers on an infinite loop, nor do they render all that smoothly.</p>
<p>The difficulty with CSS was getting the animation to transform the entire items 100% yet include an offset that was only the width of the browser (and not the items full width).</p>
<p>Setting the start of the animation to anything less than zero (e.g. -100%) is unreliable as it is based on the items width, and may not offset the full width of the browser or creates too large an offset</p>
<p>Padding left on the wrapper allows us the correct initial offset, but you still get a 'jump' as it then loops too soon. (The full text does not travel off-screen)</p>
<p>This is where adding display:inline-block to the item parent, where the natural behaviour of the element exists as inline, gives an opportunity to add padding-right 100% here. The padding is taken from the parent (as its treated as inline) which usefully is the wrapper width.</p>
<p><b>Magically*</b> we now have perfect 100% offset, a true 100% translate (width of items) and enough padding in the element to ensure all items leave the screen before it repeats! (width of browser)</p>
<p>*Why this works: The inside of an inline-block is formatted as a block box, and the element itself is formatted as an atomic inline-level box. <br>Uses `box-sizing: content-box`<br>
Padding is calculated on the width of the containing box.<br>
So as both the ticker and the items are formatted as nested inline, the padding must be calculated by the ticker wrap.</p>
<p>Ticker content c/o <a href="http://hipsum.co/">Hipsum.co</a></p>
The explanation of the changes is in the CSS comments.
Follow this: https://capacitorjs.com/docs/ios/configuration#renaming-your-app.
But do not follow this: https://help.apple.com/xcode/mac/8.0/#/dev3db3afe4f
In other words, change the 'TARGETS' but do not change the name in 'Identity and Type'. Leave the default name 'App'.
If you have a look in the WithReference function of IResourceBuilder you will note the following code:
return builder.WithEnvironment(context =>
{
    var connectionStringName = resource.ConnectionStringEnvironmentVariable ?? $"{ConnectionStringEnvironmentName}{connectionName}";
    context.EnvironmentVariables[connectionStringName] = new ConnectionStringReference(resource, optional);
});
https://github.com/dotnet/aspire/blob/e9688c40ace2271cef6444722abdf2f028ee1229/src/Aspire.Hosting/ResourceBuilderExtensions.cs#L448-L465
So to override the environment variable that gets set, we just need to use the same .WithEnvironment function but set our own custom name. Here is an example of how it would look in your case:
orderApi
    .WithReference(orderApiDatabase)
    .WithEnvironment(context => // using a custom place for the Db ConnectionString
    {
        context.EnvironmentVariables["MyCustomSection__Database__OrderApi__ConnectionString"] =
            new ConnectionStringReference(orderApiDatabase!.Resource, false);
    })
    .WaitFor(orderApiDatabase);
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{udf, struct}

val reduceItems = (items: Row) => {
  10
}

val reduceItemsUdf = udf(reduceItems)

h.select(reduceItemsUdf(struct("*")).as("r")).show()
Remove the web bundling in your app.json, i.e. this:
"web": {
"bundler": "metro",
"output": "server",
"favicon": "./assets/images/favicon.png"
},
For me, I encountered this after downgrading from React 19 to 18. My solution was to specifically update the @types/react dependency:
npm uninstall @types/react and then npm install @types/react. After doing all this, reopen the project in your text editor and the problem should be resolved.
Although I have not tried this myself yet, it is possible to use an extension to save tag IDs to a file and re-use them in another file (such as the supplemental file).
You have two ways to solve this issue:
1. From the terminal, cd to the main.py location, in this case legajos_automaticos/src/, and run your command again from there. Since you are then in the same place where the file is stored, flet can find it (it makes a difference for flet, trust me); right now flet is not finding the file under legajos_automaticos.
2. From (.venv) PS C:\Users\ricar\proyectos_flet\legajos_automaticos>, run the command this way:
flet run -d src/main.py
Good luck.
Jan9280
I can see two approaches.
First one: enforce at the source (Dataverse security roles), which is the real control.
Create a Read-Only role for your target table(s):
Table permissions: Read = Organization, Create/Write/Delete = None, Append/Append To = None (adjust if they need lookups).
Create a Writer role for selected users:
Table permissions: Create/Write (and Append/Append To) = BU/Org as needed; Delete optional.
Assign the Writer role to a Dataverse Team that’s mapped to an AAD security group. Add/remove people in that AAD group to control who can write. Everyone else only gets the Read-Only role.
This way, even if someone finds a way to hit your flow, the write will fail if they don't have Dataverse write permission.
Second one: make the flow run as the caller (not as you).
For your Instant cloud flow triggered from the Power BI button:
Open the flow → Details → Run-only users.
It seems like you're looking for an "all-in-one" answer. Maybe reworking/redoing one of your initial attempts would get you there, but I'm a fan of breaking things up. Personally, I have a work requirement related to expense tracking, so I've been researching OCR for mobile and found:
https://github.com/a7medev/react-native-ml-kit or the NPM link
With the extracted text, you could easily run a cheap/free server (AWS free-tier, Google Cloud free-tier, Heroku cheap) with a mini LLM and pass the extracted text and a text prompt to a server to get the heavy load off the user's mobile device.
Consider whether you truly want everything on a mobile device.
Even after quantizing a model, you'll still be looking at about 50-100MB of size alone (just for the model) which is a pretty large app. I believe Android's Google store has a limit of 150MB and then you have to do some funky file splitting (I think).
RESTlets are probably your best bet. Another avenue is SOAP web services, which have bulk list operations, but those will be deprecated with update 2026.1.
As far as what you call hydration, SuiteAnalytics Connect and the relevant Connect drivers are the way I would go. Same SuiteQL syntax, but better for large volumes of data.
Hello, any advanced API traders: the Postman Postbot says that the error 400 on my X-VALR-API-KEY header is the result of a trailing space before or after my token key and the next one. Is that true?
Help, thanks.
Just don't use JOIN in the DELETE; use WHERE EXISTS with a correlated subquery, like this:
delete from catalog.schema.table as O
where exists (
  select 1
  from tableWithRowsToDelete as D
  where O.col1 = D.col1
    and O.col2 = D.col2
)
Use the username (without the @) in place of the channelId; it works for me. Sadly, for the username to work you have to make the channel public.
What about doing that?
#include <stdio.h>
#include <stdlib.h>
#include <cpuid.h>

int main() {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];
    char brand[49];

    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        ((unsigned int*)vendor)[0] = ebx;
        ((unsigned int*)vendor)[1] = edx;
        ((unsigned int*)vendor)[2] = ecx;
        vendor[12] = '\0';
        printf("Vendor: %s\n", vendor);
    }

    brand[0] = '\0';
    for (unsigned int i = 0x80000002; i <= 0x80000004; i++) {
        if (__get_cpuid(i, &eax, &ebx, &ecx, &edx)) {
            unsigned int *p = (unsigned int*)(brand + (i - 0x80000002) * 16);
            p[0] = eax; p[1] = ebx; p[2] = ecx; p[3] = edx;
        }
    }
    brand[48] = '\0';
    printf("CPU Name: %s\n", brand);

    unsigned int maxLeaf = __get_cpuid_max(0, NULL);

    if (maxLeaf >= 4) {
        __cpuid_count(4, 0, eax, ebx, ecx, edx);
        unsigned int coresPerPkg = ((eax >> 26) & 0x3F) + 1;
        printf("Cores per package: %u\n", coresPerPkg);
    }

    /* Leaf 0x16 reports base/max frequency in MHz (leaf 1 does not carry clock speeds). */
    if (maxLeaf >= 0x16) {
        __get_cpuid(0x16, &eax, &ebx, &ecx, &edx);
        unsigned int baseMhz = eax & 0xFFFF;
        unsigned int maxMhz = ebx & 0xFFFF;
        printf("Base clock: %u MHz\nMax clock: %u MHz\n", baseMhz, maxMhz);
    }

    if (maxLeaf >= 4) {
        int i = 0;
        while (1) {
            __cpuid_count(4, i, eax, ebx, ecx, edx);
            unsigned int cacheType = eax & 0x1F;
            if (cacheType == 0) break;
            unsigned int level = (eax >> 5) & 0x7;
            unsigned int ways = ((ebx >> 22) & 0x3FF) + 1;
            unsigned int partitions = ((ebx >> 12) & 0x3FF) + 1;
            unsigned int lineSize = (ebx & 0xFFF) + 1;
            unsigned int sets = ecx + 1;
            unsigned int size = ways * partitions * lineSize * sets / 1024;
            printf("L%u cache size: %u KB\n", level, size);
            i++;
        }
    }

    return 0;
}
Expected Output:
Vendor: GenuineIntel
CPU Name: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Cores per package: 8
Base clock: 2400 MHz
Max clock: 4200 MHz
L1 cache size: 48 KB
L1 cache size: 32 KB
L2 cache size: 1280 KB
L3 cache size: 8192 KB
The most common way is to wrap your command in a retry loop inside your PowerShell or Bash script, where you can check the attempt number and add Start-Sleep between tries.
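If it helps, here is the same retry idea sketched in TypeScript for a Node-based step (the command name is a placeholder); the structure translates one-to-one to PowerShell with Start-Sleep or Bash with sleep:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(maxAttempts = 5): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // "flaky-command" stands in for whatever step fails intermittently.
      await run("flaky-command", ["--some-arg"]);
      return; // success, stop retrying
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      console.warn(`Attempt ${attempt} failed, retrying in ${attempt * 5}s...`);
      await sleep(attempt * 5000); // simple linear back-off between tries
    }
  }
}

withRetry().catch(() => process.exit(1));
```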
You should use the value={selectedColor} prop instead of defaultValue. That makes the Select “controlled” so it will keep focus on the selected option even after re-renders.
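A minimal sketch of the difference using a plain HTML select in React (adapt the prop names to whichever Select component you are using; selectedColor/setSelectedColor are assumed to come from useState):

```tsx
import { useState } from "react";

export function ColorPicker() {
  const [selectedColor, setSelectedColor] = useState("red");

  return (
    // value + onChange makes the select controlled: the rendered selection always
    // reflects state, even across re-renders. defaultValue only sets the initial
    // selection and is ignored afterwards.
    <select value={selectedColor} onChange={(e) => setSelectedColor(e.target.value)}>
      <option value="red">Red</option>
      <option value="green">Green</option>
      <option value="blue">Blue</option>
    </select>
  );
}
```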
Hey, I also want to create a site similar to that one, but I really don't know how to embed such Ruffle games. Is there any way I can find out how?
I know this was solved 13 years ago, but I would like to reemphasize what gbulmer said:
If this is an interview question (and not on a closed-book test), you should be asking questions. The interview actually has three goals:
to see if you know the tech (the most obvious test; if you can't perform the task, you fail)
to see if you need excessive hand-holding (if you ask dozens of questions, you will fail this one)
to see if you will verify unspoken assumptions (if you ask 0 questions, you will fail this one instead)
The task looks neat and tidy, but it is actually ridiculously broad. Here are the questions you need to ask, before starting on your task:
What does "safe" mean?
Should the data structure be type-safe? (and how do I handle garbage data?)
Should it be thread-safe? (or can I assume only one process will ever use it?)
Are there any additional "safety features" you need? (security, error correction, backups. They should say "no", but it doesn't hurt to ask.)
What does "efficient" mean?
Should you prioritize time or space?
Should you prioritize saving numbers or retrieving numbers?
What does "a phone book" mean?
Can numbers be longer than 8 digits (18 on a 64 bit system?)
Can numbers have additional symbols in them (-, +, #, and space are likely) and if so, should these numbers be reproduced as written, stripped down to a sequence of digits, or reconstituted into a specific format?
Can people's names consist entirely of numbers (and whatever additional symbols we designated in question 3.2?)
Are there future plans to expand the phone book with additional fields (addresses for example) or can you assume that a name-to-number correspondence is all that will ever be needed?
Can contacts be modified?
A name assigned a new number?
A number assigned a new name?
The whole contact be deleted?
Having clarified the task, you can proceed. Assuming the answers are: type-safe but not thread-safe, and the structure should return a blank name or 0 when the input is incorrect but never throw exceptions. Prioritize time and retrieval. Numbers can be as long as the user wants, but all numbers with the same digits are considered equal, names will include at least one letter, and contacts will be deleted when they become obsolete, with no further modification. A possible solution can do the following:
The data structure will expose 5 methods:
boolean AddContact(string name, string number)
string FindNumber(string name)
string FindName(string number)
boolean DeleteByName(string name)
boolean DeleteByNumber(string number)
Internally it will consist of a HashMap (we are guaranteed no collisions between numbers and names, so one is enough) and a few helper methods.
Sample implementation here: https://dotnetfiddle.net/JWEUPi
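For illustration only (this is not the linked fiddle), a minimal TypeScript sketch of the single-map idea, relying on the assumption above that names and normalized numbers can never collide:

```typescript
class PhoneBook {
  // One map holds both directions: name -> number and normalized number -> name.
  private entries = new Map<string, string>();

  private static normalize(number: string): string {
    return number.replace(/\D/g, ""); // numbers with the same digits are considered equal
  }

  addContact(name: string, number: string): boolean {
    const digits = PhoneBook.normalize(number);
    if (!name || !digits || this.entries.has(name) || this.entries.has(digits)) return false;
    this.entries.set(name, number);
    this.entries.set(digits, name);
    return true;
  }

  findNumber(name: string): string {
    return this.entries.get(name) ?? "0"; // bad input: return "0", never throw
  }

  findName(number: string): string {
    return this.entries.get(PhoneBook.normalize(number)) ?? ""; // bad input: blank name
  }

  deleteByName(name: string): boolean {
    const number = this.entries.get(name);
    if (number === undefined) return false;
    this.entries.delete(name);
    this.entries.delete(PhoneBook.normalize(number));
    return true;
  }

  deleteByNumber(number: string): boolean {
    const name = this.entries.get(PhoneBook.normalize(number));
    return name !== undefined && this.deleteByName(name);
  }
}
```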
In Android Studio Ladybug 2024.2.1 or IntelliJ IDEA, this error can happen even if you have Java 21 installed and enabled by default. For example, you could set your $JAVA_HOME environment variable to use the JDK that comes from Android Studio, using this guide Using JDK that is bundled inside Android Studio as JAVA_HOME on Mac :
# For ~/.bash_profile or ~/.zshrc
export JAVA_HOME="/Applications/Android Studio.app/Contents/jbr/Contents/Home"
But for some reason, an Android project that declares that it needs Java 17 in a build.gradle file cannot be compiled with Java 21.
java {
sourceCompatibility JavaVersion.VERSION_17
targetCompatibility JavaVersion.VERSION_17
}
kotlin {
jvmToolchain(17)
}
You'll see an error like this when you try to build the app:
org.gradle.jvm.toolchain.internal.NoToolchainAvailableException:
No matching toolchains found for requested specification:
{languageVersion=17, vendor=any, implementation=vendor-specific} for MAC_OS on aarch64.
Could not determine the dependencies of task ':app:compileKotlin'.
> No locally installed toolchains match and toolchain download repositories have not been configured.
The solution is to download a specific Java 17 JDK/SDK manually, and make your project use it:
I'm experiencing the same issue, and everything I've tried isn't working. Works fine on iOS though...
Interesting... I tried to rebuild the code in "Release" mode and everything works nicely. Still does not work in Debug mode.. weird
I created an NPM package google-maps-vector-engine to handle PBF/vector tiles on Google Maps, offering near-native performance and multiple functionalities. I recommend giving it a try.
My issue was that I had a lot of unsaved tabs that I was not sure I would keep.
In my case, I had to use a different user for that task.
I just force-killed the specific session/process ID and relaunched it; on relaunch it showed the recovery option. That solved my issue.