Thank you everyone for your comments and answers. I found out that it was VS Code all along, refreshing the same log file I was appending data to. I think VS Code started doing that with the latest update, as I had not seen it auto-refresh while open before.
I added
$wgMainCacheType = CACHE_ACCEL;
$wgSessionCacheType = CACHE_DB;
which did not help. Then I realised I already had
## Shared memory settings
$wgMainCacheType = CACHE_NONE;
$wgMemCachedServers = array();
CACHE_NONE is recommended above; I tried both ACCEL and NONE.
I deleted cookies. I don't know how to access the database. I still can't log in on one browser (my main browser), but I can see the wiki content.
I used a regular expression to solve this issue. In Polars you signal a regular expression with the start and end anchors ^ and $, and the * needs to be escaped, so the full solution looks like:
import polars as pl
df = pl.DataFrame({"A": [1, 2], "B": [3, None], "*": [4, 5]})
print(df.select(pl.col(r"^\*$"))) # --> prints only the "*" column
There are two locations for this information. In some cases, you might need to look in both places (try the primary first; if it's missing, try the alternate).
Primary location:
host.hardware.systemInfo.serialNumber
Alternate location:
host.summary.hardware.otherIdentifyingInfo
In some of my systems, I cannot find the tags in the primary location, and traversing the alternate location helps find them. Between those two locations, I have always been able to get the tags. It can be a bit tricky to fish the info out; the following code should help.
if host.summary.hardware and host.summary.hardware.otherIdentifyingInfo:
    for info in host.summary.hardware.otherIdentifyingInfo:
        if info.identifierType and info.identifierType.key:
            key = info.identifierType.key.lower()
            if "serial" in key or "service" in key or "asset" in key:
                if info.identifierValue and info.identifierValue.strip():
                    serial_number = info.identifierValue
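Putting the two locations together, the fallback logic described above could be wrapped in a helper like this (a sketch assuming a pyVmomi-style host object; the function name get_serial_number and the getattr-based guards are my own additions):

```python
def get_serial_number(host):
    # Primary location: host.hardware.systemInfo.serialNumber
    info = getattr(getattr(host, "hardware", None), "systemInfo", None)
    if info and getattr(info, "serialNumber", None):
        return info.serialNumber

    # Alternate location: traverse host.summary.hardware.otherIdentifyingInfo
    hw = getattr(getattr(host, "summary", None), "hardware", None)
    for entry in getattr(hw, "otherIdentifyingInfo", None) or []:
        if entry.identifierType and entry.identifierType.key:
            key = entry.identifierType.key.lower()
            if any(word in key for word in ("serial", "service", "asset")):
                if entry.identifierValue and entry.identifierValue.strip():
                    return entry.identifierValue
    return None
```

The guards make it safe to call against hosts where either location is missing, which matches the "always found it in one of the two" experience above.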
I love you! I spent a whole day debugging the framework... for this simple thing... T.T
Have you solved this problem? I encountered the same issue.
If you're looking for a C implementation (usable from C++) that handles UTF-8 quite well and is also very small, you could also have a look here:
How to uppercase/lowercase UTF-8 characters in C++?
These C-functions can be easily wrapped for use with std::string.
I'm not saying this is the most robust way, after all, all the problems with std::string will remain, but it could be helpful in some use cases.
Is there any API I can use with C++ to do this?
No, there is no API to perform this task.
Microsoft's policy is that such tasks must be performed by the user using the provided Windows GUI.
Explaining the use case: if you are doing data augmentation, then the following sequence will usually work:
import tensorflow as tf
from tensorflow.keras import layers

data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])
If it's the imperative approach you're after, the original loop should do just fine.
Probably just my style preference -- I prefer having the logic in the accumulator -- it seems more like the imperative solution.
What would git(1) do? Or, What does git(1) do?
! tells it to run your alias in a shell. What shell? It can’t use a
specific one like Ksh or Zsh. It just says “shell”. So let’s try /bin/sh:
#!/bin/sh
mvIntoDIR() {
cd ${GIT_PREFIX:-.};
allArgsButLast="${@:1:$#-1}";
lastArg="${@: -1}";
git mv -v $allArgsButLast $lastArg/;
git commit -uno $allArgsButLast $lastArg -m "Moved $allArgsButLast into $lastArg/"; \
};
mvIntoDIR
We get the same error but a useful line number:
5: Bad substitution
Namely:
allArgsButLast="${@:1:$#-1}";
Okay. But this is a Bash construct. So let’s change it to that:
#!/usr/bin/env bash
mvIntoDIR() {
cd ${GIT_PREFIX:-.};
allArgsButLast="${@:1:$#-1}";
lastArg="${@: -1}";
git mv -v $allArgsButLast $lastArg/;
git commit -uno $allArgsButLast $lastArg -m "Moved $allArgsButLast into $lastArg/"; \
};
mvIntoDIR
We get a different error:
line 5: $#-1: substring expression < 0
Okay. So git(1) must be running a shell which does not even know what
${@:1:$#-1} means. But Bash at least recognizes that you are trying to use a construct that it knows, even if it is being misused in some way.
Now the script is still faulty. But at least it can be fixed since it is running in the shell intended for it.
I would either ditch the alias in favor of a Bash script or make a Bash script and make a wrapper alias to run it.
If you don't want to map(), you could replace the accumulator with
(map, foo) -> {
    map.put(foo.id(), foo);
    map.put(foo.symbol(), foo);
}
But at this point it's hard to see how streaming is an improvement over a simple loop. What do you have against map() anyway?
The order of your parentheses is wrong:
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size)), y)
Should be:
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
This should be possible, depending on what your data looks like. You may need to use transformations to split the data into multiple series. Do you have an example of your data?
I threw this State Timeline together with the following CSV data, not sure how close it is to what you have:
Start, End, SetID, ModID, ModType
2025-01-01 01:00:00, 2025-01-01 01:15:00, 100, 101, Set
2025-01-01 01:00:00, 2025-01-01 01:01:00, 100, 102, Alpha
2025-01-01 01:02:00, 2025-01-01 01:04:00, 100, 103, Beta
2025-01-01 01:05:00, 2025-01-01 01:25:00, 110, 111, Set
2025-01-01 01:05:00, 2025-01-01 01:08:30, 110, 113, Alpha
2025-01-01 01:07:00, 2025-01-01 01:12:00, 110, 115, Gamma
Transformations:
Format the times correctly
The main one is Partition by values to split the data into separate series based on SetID & ModID
That should get you the chart you want, but you'll want to add some settings so the names don't look weird and to color the bars how you like:
Display Name: Module ${__field.labels.ModID} to convert from ModType {ModID="101", SetId="100"}
Value Mappings: Set -> color Yellow , Alpha -> Red, Beta -> Green, etc.
You can use the trim() modifier to cut a Shape in half:
Circle()
.trim(from: 0, to: 0.5)
.frame(width: 200, height: 200)

Learn more from this link.
https://github.com/phoenix-tui/phoenix - a high-performance TUI framework for Go with DDD architecture, perfect Unicode handling, and Elm-inspired design. A modern alternative to Bubbletea/Lipgloss, with all function keys supported out of the box!
Maybe your Layers config is not set up the way you think. If you check it, you might be surprised (Project Settings -> Physics (or Physics 2D)).
For C++ it's this:
-node->get_global_transform().basis.get_column(2); // forward
node->get_global_transform().basis.get_column(0); // right
node->get_global_transform().basis.get_column(1); // up
https://pub.dev/packages/web_cache_clear
I made this package because I needed it too. It assumes you have a backend where you can update your version number; every time the page loads, it compares the session version to the backend version. If they're not the same, it clears the cache storage and reloads the page.
In the integrated terminal or the macOS Terminal, it doesn't matter: just type su, press Enter, and input your password. After becoming root, run npm install -g nodemon. It worked for me this way.
What is @{n='BusType';e={$disk.bustype}}? AI gave me similar examples, and I just barely understand it. It seems n and e are shortcuts for Name and Expression in the so-called calculated property syntax:
@{Name='PropertyName'; Expression={Script Block}} or @{n='PropertyName'; e={Script Block}}.
AI suggested an example:
Get-ChildItem -File | Select-Object Name, @{n='SizeMB'; e={$_.Length / 1MB}}
demonstrating exactly what I wanted to achieve. So why does @{n;e} act strangely in Select-Object?
This is due to a (not so?) recent change in margin collapsing. In a block layout, the margin-bottom and margin-top of the heading elements collapse (only one margin is applied), but in a flex layout, the margins are not collapsed. So what you see in the flex layout is all the margins accounted for.
Try removing margin-top or margin-bottom for your needs. You can read more about margins here: https://www.joshwcomeau.com/css/rules-of-margin-collapse/ or at mdn: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_box_model/Mastering_margin_collapsing
You could use process tracking instead
vrrp_instance Vtest_2 {
[...]
track_process {
check_haproxy
}
}
The tracking block would look like this
vrrp_track_process check_haproxy {
process haproxy
weight 10
}
This way you dont need a separate script running.
For those that are facing the same problem, please check this configuration under:
File > Project Structure > Project Settings > Modules
Screenshot:
This is still happening 6.6 years later Microsoft SUCKS
You can use a Navigator widget or a named route to achieve this https://docs.flutter.dev/ui/navigation#using-the-navigator
If you intend displaying as a pop-up modal, then refer to the example on showDialog https://api.flutter.dev/flutter/material/showDialog.html#material.showDialog.1
As you suspected, yes, I agree that the aggregation query is quite inefficient. I am not sure whether you have supporting indexes for these.
The base collection you run the aggregate query on should have this index: { "seller_company_id": 1, "meetings.is_meeting_deleted": 1, "is_deleted": 1, "is_hidden": 1 }
It is OK if you don't have "meetings.is_meeting_deleted": 1 in the index, as it is multikey.
For the joined collection Company, the default _id index is sufficient.
The CPU utilisation seems pretty high (100%), and per the real-time tab screenshot there seem to be a lot of getMore operations running. I believe it is either Atlas Search or a change stream. Can you share which getMore commands are running most often?
With the above info we can find some clues.
thanks, Darshan
Use https://www.svgviewer.dev/: paste in the XML code, download it, and import it (worked for me).
Please note that in MongoDB, when you drop a collection or index, it may not be deleted immediately. Instead, it enters a "drop-pending" state, especially in replica sets, until it is safe to remove, i.e. after replication and commit across nodes. The "ObjectIsBusy" error means MongoDB cannot proceed because the object, such as an index, is still active or not fully deleted.
This status is internal and usually clears automatically once MongoDB finishes its background cleanup.
As you said it is a fresh MongoDB install, which makes me curious whether an older version of mongod had already been run and abandoned. If it is fresh, you can clear the dbPath and try starting them again.
Thanks, Darshan
!4 value filled in property worked for me
Figured out that you can get data out of the embedded report through the Power BI JavaScript client; we used this to get the user's filter selections. We were also able to add the user's email address to the Row Level Security and implement it in the reports so they only see content they are allowed to see.
The ErrorLevel is equal to -1 when the volume is encrypted.
You can run the following command, for example, to unlock the volume:
manage-bde -status c: -p || manage-bde -unlock c: -RecoveryPassword "Key-Key..."
One useful resource is the AWS latency matrix from Cloud Ping.
You can use this website to answer these questions : https://www.dcode.fr/javascript-keycodes
Functions for component-wise operations have the prefix cwise in Eigen; cwiseQuotient performs component-wise division.
Agree. It's not the same when you are exporting to CSV; some columns need to be adjusted. In my case, many columns contain numerical IDs that often start with zeros. Excel deletes those zeros, makes the column numeric and, worse, puts the number in scientific notation.
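On the import side, one way to keep such IDs intact is to force the column to be read as text. A minimal pandas sketch (the column names id/amount are made up for illustration):

```python
import io

import pandas as pd

csv = "id,amount\n007,1.5\n042,2.0\n"

# Default parsing makes the ID column numeric and drops the leading zeros
lost = pd.read_csv(io.StringIO(csv))

# Forcing the column to str keeps "007" as "007"
kept = pd.read_csv(io.StringIO(csv), dtype={"id": str})
```

Excel has an equivalent via the Text Import Wizard (marking the column as Text), but doing it in code avoids the manual step.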
@tibortru's answer works. I had a requirement to run a scheduled Spring Batch job much more often in the test environments. I achieved this like so in application-test.yml:
batch:
  scheduler:
    cron: 0 ${random.int[1,10]} 7-23 * * *
And referenced it like so:
@Scheduled(cron = "${batch.scheduler.cron}")
public void runJob()
Azure SQL supports synonyms for cross-database access but only on the same server.
"Four-part names for function base objects are not supported."
"You cannot reference a synonym that is located on a linked server."
I encountered this while trying to download a file using Lambda from S3.
For my scenario, I did the following steps:
Go to IAM -> Roles -> Search for Lambda's role (you can find it in Lambda -> Permissions -> Execution role -> Role name)
Click Add permissions -> Create inline policy
Choose a service -> S3
In the Filter Actions search bar look for GetObject -> Select the checkbox
In Resources click on Add ARNs in the "object" row
Add bucket name and the resource object name if needed - if not, select Any bucket name and/or Any object name. Click Add ARNs
Click Next -> Add a Policy name
Click Create policy
Add
<uses-permission android:name="android.permission.WRITE_SETTINGS" />
to your AndroidManifest.xml.
The solution I've often employed in this type of scenario makes use of cfthread and some form of async polling.
Without a concrete example, I'll try and outline the basic mechanics...
User submits request.
A Unique ID is generated.
A <cfthread> begins, handling the long-running request and writing status updates to a session variable scoped by the unique ID.
The unique ID is returned to the user, and they are directed to a page that uses JS to poll an endpoint that reads the session-scoped status information.
I've used XHR polling, and event-stream based solutions here - but the principle holds whichever technique you employ.
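The same mechanics can be sketched in Python rather than CFML (the status dict stands in for the session scope; all names here are illustrative, not part of the original answer):

```python
import threading
import time
import uuid

status = {}  # stands in for the session scope, keyed by the unique ID

def long_running(job_id):
    # the <cfthread> equivalent: do the work, write status updates
    status[job_id] = "running"
    time.sleep(0.1)  # pretend this is the long-running request
    status[job_id] = "done"

job_id = str(uuid.uuid4())  # a unique ID is generated per request
threading.Thread(target=long_running, args=(job_id,)).start()

# the client-side polling loop, condensed: keep asking until the job reports done
while status.get(job_id) != "done":
    time.sleep(0.05)
```

Whether the client polls via XHR or subscribes to an event stream, the server side stays the same: background work writes status, a cheap endpoint reads it.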
Encountered the same problem today and was pretty lost.
It seems it was due to mismatches in the package versions between the apps and the library.
In the end I ran `pnpm dlx syncpack fix-mismatches` (or run `pnpm dlx syncpack list-mismatches` first to see what changes will be applied) and the problem was solved.
linkStyle={{
backgroundColor: x + 1 !== page ? "#ffffffff" : "",
color: x + 1 !== page ? "#000000ff" : "",
borderColor: x + 1 !== page ? "#000000ff" : "",
}}
Add this inline CSS, or create custom CSS in index.css; it will resolve the issue.
Did you find a solution, brother? We all are in the same boat here.
I installed a php package that allows you within your composer.json file to configure a path that copies vendor assets to a public directory.
If you'd like to explore Deep Web, the link below is one of the best doorways to start your journey!
NOTE: ONLY FOR EDUCATIONAL PURPOSE!
I had my apt sources messed up, my bad.
Sorting the filenames before generating their arrays / hashes fixed it.
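A minimal illustration of why the sort matters: hashing the same set of filenames listed in two different orders only gives a stable digest once the list is sorted first (the filenames are made up):

```python
import hashlib

def file_list_digest(filenames):
    # Sort before hashing so directory listing order cannot change the result
    return hashlib.sha256("\n".join(sorted(filenames)).encode()).hexdigest()

run_one = file_list_digest(["b.txt", "a.txt", "c.txt"])
run_two = file_list_digest(["c.txt", "b.txt", "a.txt"])
```

os.listdir() and glob make no ordering guarantee across platforms or filesystems, which is exactly how two runs over identical files can produce different hashes.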
I know I'm late, but I just stumbled upon the same issue. I'm using OpenPDF 1.3.33, and by default a cell's element is aligned to the left.
You need to set:
p.setAlignment(Element.ALIGN_CENTER); // not Element.ALIGN_MIDDLE
Problem solved. I had a subsequent:
parameters.to_sql("Parameter", con=connection, if_exists='replace', index=False)
That replaces the TABLE during data import, not just the existing ROWS.
Anyway, thanks for your feedback!
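The distinction in a nutshell, using an in-memory SQLite database (table and column names are illustrative): if_exists='replace' drops and recreates the table, while if_exists='append' keeps the existing rows.

```python
import sqlite3

import pandas as pd

con = sqlite3.connect(":memory:")
pd.DataFrame({"value": [1, 2]}).to_sql("Parameter", con, index=False)

# 'replace' drops the whole table first: rows 1 and 2 are gone
pd.DataFrame({"value": [3]}).to_sql("Parameter", con, if_exists="replace", index=False)

# 'append' inserts into the existing table instead
pd.DataFrame({"value": [4]}).to_sql("Parameter", con, if_exists="append", index=False)
```

After the two calls, the table contains only 3 and 4: the replace wiped the initial rows, and only the append preserved what was already there.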
I encountered this exact issue and resolved it. The 10-second delay is caused by IPv6 connectivity being blocked in your Security Groups.
Service A: Fargate task with YARP reverse proxy + Service Connect
Service B: Fargate task with REST API
Configuration: Using Service Connect service discovery hostname
VPC: Dual-stack mode enabled
The fix: add IPv6 inbound rules to your Security Groups.
The root cause (as identified with Claude AI's help):
When using ECS Service Connect with Fargate in a dual-stack VPC:
Service Connect attempts IPv6 connection first (standard .NET/Java behavior per RFC 6724)
Security Group silently drops IPv6 packets (if IPv6 rules aren't configured)
TCP connection times out after exactly 10 seconds (default SYN timeout)
System falls back to IPv4, which succeeds immediately
For anyone looking at this more recently:
In scipy version 1.16, and presumably earlier, splines can be pickled and the code in the question works without error.
Probably you need to remove the current connection and add it again.
@jared_mamrot
Question: I want to calculate elasticities, so I am using elastic().
ep2 <- elastic(est,type="price",sd=FALSE)
ed <- elastic(est,type="demographics",sd=FALSE)
ei <- elastic(est,type="income",sd=FALSE)
ep2 and ed work, but with type="income", running ei raises an error:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
With AWS Service Connect, name resolution tries IPv6 first; if that fails, it falls back to IPv4.
In the failure case, it hangs for 10 s.
To resolve this, check your security group and allow the IPv6 port.
Thank you so much! It solved the problem after two entire days of struggling. Here is a minor tweak:
curl 'https://graph.facebook.com/v22.0/YOUR_PHONE_NUMBER_ID/register' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
-d '{
"messaging_product": "whatsapp",
"pin": "000000"
}'
As said above, maps are not safe inside goroutines like this.
The simplest way to modify your example is to add a mutex and perform lock/unlock on each pass.
package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    m  map[string]int
    mu sync.Mutex
}

func main() {
    counter := Counter{m: make(map[string]int)}
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            key := fmt.Sprintf("key%d", id)
            counter.mu.Lock()
            defer counter.mu.Unlock()
            counter.m[key] = id * 10
        }(i)
    }
    wg.Wait()
    fmt.Println(counter.m)
}
This should help demonstrate the issue. But there are other ways to handle this, with read/write mutex or a channel. Note that using locks may slow down the program, compared to other methods.
More discussion: How safe are Golang maps for concurrent Read/Write operations?
This appears to be a recent issue with Anaconda, and on the Anaconda forum, this solution has worked:
Could you try updating conda-token by running:
conda install --name base conda-token=0.6.1
I think you should use the style below to access the nested variables in scripts:
doc["attributes"]["jewelry_v2"]["keyword"]
Or use the ctx style:
ctx.attributes.jewelry_v2.keyword.value
Look at this post to better understand character sets (Unicode) and encodings (UTF-8).
I downgraded DotVVM.AspNetCore from 5.0.0 to 4.3.9 and now it works fine.
Writing to the log file from multiple processes will work fine, until the maximum log file size is reached and the RotatingFileHandler attempts to rotate the log file. Rotating the log file is done by renaming the log file (appending .1 to the file name). This will not be possible because the file will be in use (locked) by other processes.
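One standard workaround is to funnel all records through a queue so that only a single QueueListener ever touches (and rotates) the file. This sketch is single-process for brevity; for the multi-process case you'd pass a multiprocessing.Queue to the workers instead:

```python
import logging
import logging.handlers
import os
import queue
import tempfile

# Workers never touch the file: they only enqueue records. One
# QueueListener owns the RotatingFileHandler, so the rotation rename
# happens in a single place and cannot collide with other writers.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
log_queue = queue.Queue()  # use multiprocessing.Queue across processes
file_handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=10_000, backupCount=3
)
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

worker_log = logging.getLogger("worker")
worker_log.setLevel(logging.INFO)
worker_log.addHandler(logging.handlers.QueueHandler(log_queue))
worker_log.info("hello from a worker")

listener.stop()  # drains the queue before the handler closes
```

The QueueHandler/QueueListener pair is part of the standard library (Python 3.2+) and is the approach the logging cookbook recommends for multi-process logging.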
So the reason I'm getting Forbidden is that, despite my channel being partnered, I still have to request access to the API. Not entirely sure where, but that's why I'm getting a 403.
bump - asking the same question
I just found out the issue.
Business-initiated calling is not available for U.S. phone numbers in the Meta (WhatsApp Business Cloud API).
That’s why the API returns:
(#138013) Business-initiated calling is not available
So if you're testing with a U.S. WhatsApp number (starting with +1), you won’t be able to send call permission requests or start business-initiated calls.
Switching to a phone number from a supported country resolves the issue.
I found the issue. In the orm.xml, I had to use
org.axonframework.eventhandling.AbstractDomainEventEntry
for the attribute config instead of
org.axonframework.eventhandling.AbstractSequencedDomainEventEntry
Just restarting Jenkins fixed that for me.
It is a disassembly of try-catch blocks.
Please check this out: __unwind
@Tobias, Oracle explicitly permits custom information in the manifest:
Modifying a Manifest File
You use the m command-line option to add custom information to the manifest during creation of a JAR file. This section describes the m option.
For me, the answer from @devzom actually worked. After a little research into it I found that by default, Astro uses static site generation (SSG) where pages are pre-rendered at build time. In default ('static') mode, routes with dynamic query parameters don't work properly because there's no server running to handle the requests at runtime.
By adding output: 'server', you're telling Astro to run in Server-Side Rendering (SSR) mode, which means there's an actual server processing requests, so dynamic routes with query parameters work correctly.
In the file astro.config.mjs
export default defineConfig({
output: 'server',
});
I then put my code up on DigitalOcean and had to install the following adapter:
npm install @astrojs/node
(or in my case, since I was using an older theme: npm install @astrojs/node@^8.3.4 )
Then your astro.config.mjs looks like:
import node from '@astrojs/node';
export default defineConfig({
output: 'server',
adapter: node({
mode: 'standalone',
}),
});
There are also hosting adapters for Vercel, Netlify, and Cloudflare
https://docs.astro.build/en/guides/on-demand-rendering/
As others have mentioned, setting output to 'hybrid' in astro.config.mjs will work, but you will need to add "export const prerender = false;" at the top of the page that you want to read the query parameters from; basically this tells Astro that the route in that file needs SSR.
All your routes will be served through that __fallback.func. What makes you think that they aren't being served? What happens when you access a route?
Installing binutils package in the Alpine Docker image solved this handshake issue for me.
I saw this solution in alpine/java image. Here's the link to its Dockerfile
The error is that jalali_date uses distutils, but in Python 3.12 distutils has been removed and must be replaced with setuptools:
pip install setuptools
Thank you to everyone who replied. I think this is more than enough feedback to work with; I clearly still have a lot to learn. Also, apologies to @AlexeiLevenkov for not putting this in code review. I will look at using table lookups and animating palettes. For anyone else who stumbles on my thread: I had a bug in my code where it was clipping because the seconds were looping back around. The fix was to create a frameCounter variable, increment it every time Invalidate() was called, and use that instead of DateTime.Now.Second; otherwise it will not be smooth.
There is actually a way to achieve this. In cases where you are packaging multiple executables in a single package, this option will be helpful. I've verified this with pyinstaller==6.15.0:
exe = EXE(....,
....,
contents_directory='_new_internal'
)
Planner Premium uses Project schedule APIs. The official documentation is here:
But I'm not familiar with those APIs.
Okay, thanks bro. I tried changing the MARGIN attribute to 3, and now the LABEL can be centered!
Here are some clarifications for this request:
Step 1: Data
Let's assume you have a factory that produces 3 products (A,B,C). Each product goes through 3 production steps (Stamping, Assembly, Test). You measure for each product the cycle time per production step in minutes. The data looks like this:
import pandas as pd

products = ['Product A','Product A','Product A','Product A','Product A','Product B','Product B','Product B','Product B','Product B','Product C','Product C','Product C','Product C','Product C']
production_step = ['Start','Stamping','Assembly','Test','End','Start','Stamping','Assembly','Test','End','Start','Stamping','Assembly','Test','End']
cycle_time = [0,150,100,170,0,0,130,80,100,0,0,100,90,120,0]
data = list(zip(products, production_step, cycle_time))
df = pd.DataFrame(data, columns=['Product', 'Production Step', 'Cycle Time (min)'])
Note: I artificially added a "Start" and "End" time of 0 so the area charts in the next Step 2 resemble density plots.
Step 2: Visualization
import plotly.graph_objects as go

fig = go.Figure()
fig.add_trace(go.Scatter(x=df["Production Step"], y=df["Cycle Time (min)"], line_shape='spline', fill='tozeroy'))
fig.show()
Step 3: Make it look like a ridge plot
This is where I'm stuck. What is missing is a vertical offset and the Product names appearing along the y-axis.
Do you have any ideas how to make this happen?
Pausing briefly to soak in the view of migratory birds across the marshy land, Jadhav continued to share the best practices that the O&M teams follow at the company’s different projects. In the initial phase of the project, the O&M team is involved in devising the costing model that is provided to the bidding team. The technology parameters are then planned, before selecting the equipment. When the construction phase begins, the O&M team is deployed at the site to supervise the quality of work and ensure smooth handover of the plant, in addition to engaging third-party players. This ensures that the plant starts generating electricity to its full commissioned capacity to avoid generation loss. The team also develops a breakdown impact analysis and mitigation matrix for all equipment in the solar plant. This serves as a guide for responding to equipment failures and outlining specific actions to be taken for each type of issue. “Before the plant begins operations, we have a certification programme called the “Competency Matrix”, which has three levels – competent, conquerer and champion. All employees at the site must pass the exam and have to undertake necessary certification according to their job profile. Until they get this certification, they are not allowed to work independently at the site,” says Jadhav.
Please let me suggest a slightly different approach for your problem. I hope it works for you.
Spoiler: While trying to reproduce your problem, I found something that I think is missing in PluginClassLoader for handling dynamically loaded jars: It adds all URLs of the jars shipped as dependency with the plugin, but not the URL of the jar itself. This can be fixed. Let me show you how:
First my code to reproduce your problem. You didn't share your actual code, but I used the description from here to start: https://alef1.org/jean/jpf/
I created a simple plugin, which does actually nothing, just log messages when started and stopped.
public class SamplePlugin extends Plugin {
    @Override
    protected void doStart() {
        System.out.println("Plugin started");
    }

    @Override
    protected void doStop() {
        System.out.println("Plugin stopped");
    }
}
<?xml version="1.0" ?>
<!DOCTYPE plugin PUBLIC "-//JPF//Java Plug-in Manifest 1.0" "http://jpf.sourceforge.net/plugin_1_0.dtd">
<plugin id="org.example.string" version="0.0.4"
class="org.example.SamplePlugin">
</plugin>
And a Main class, which loads that plugin from a jar file in my case located under maven-pg-plugin/target:
public class Main {
    public static void main(String[] args) throws IOException, JpfException {
        ObjectFactory objectFactory = ObjectFactory.newInstance();
        PluginManager pluginManager = objectFactory.createManager();
        File pluginDir = new File("./maven-pg-plugin/target");
        File[] plugins = pluginDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (plugins != null) {
            PluginManager.PluginLocation[] locations = Arrays.stream(plugins).map(Main::createPluginLocation)
                    .toArray(PluginManager.PluginLocation[]::new);
            Map<String, Identity> result = pluginManager.publishPlugins(locations);
            for (Map.Entry<String, Identity> entry : result.entrySet()) {
                Identity identity = entry.getValue();
                String pluginId = identity.getId();
                pluginManager.activatePlugin(pluginId);
                pluginManager.deactivatePlugin(pluginId);
            }
        }
    }

    private static PluginManager.PluginLocation createPluginLocation(File file) {
        try {
            return StandardPluginLocation.create(file);
        } catch (MalformedURLException e) {
            // replace RuntimeException with something more suitable
            throw new RuntimeException(e);
        }
    }
}
When running, the output of the sample plugin should be seen on the console.
That code works when the jar is also on the classpath, but stops working when removed from the classpath. The behaviour is the same with Java 8 as well as with newer versions.
Since PluginClassLoader is also an instance of URLClassLoader, I could fix it by calling addURL (which is protected) via reflection and adding the jar of the plugin itself:
private static void fixPluginClassLoader(PluginManager pluginManager, String pluginId) {
    PluginDescriptor descriptor = pluginManager.getRegistry().getPluginDescriptor(pluginId);
    // retrieve the URL to the plugin jar through JPF
    URL url = pluginManager.getPathResolver().resolvePath(descriptor, "");
    PluginClassLoader pluginClassLoader = pluginManager.getPluginClassLoader(descriptor);
    try {
        Method method = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
        method.setAccessible(true);
        method.invoke(pluginClassLoader, url);
    } catch (ReflectiveOperationException e) {
        // replace RuntimeException with something more suitable
        throw new RuntimeException(e);
    }
}
And insert the call to that method before using the plugin:
for (Map.Entry<String, Identity> entry : result.entrySet()) {
    Identity identity = entry.getValue();
    String pluginId = identity.getId();
    fixPluginClassLoader(pluginManager, pluginId); // <-- addition here
    pluginManager.activatePlugin(identity.getId());
    pluginManager.deactivatePlugin(identity.getId());
}
Tadaa! 🎉 For Java 11 this fix works out of the box, the jar doesn't need to be on the classpath and no extra command line arguments.
However with newer Java versions with extended module checks the command line argument --add-opens needs to be added to access the protected method of URLClassLoader:
--add-opens java.base/java.net=ALL-UNNAMED
(using ALL-UNNAMED assuming you are not actively creating Java modules)
Two notes of warning:
jpf-1.5.jar, I see eight of them on jpf-boot-1.5.jar. If you need it for production code, please consider migrating to something better maintained, e.g. an OSGi implementation.

Because of this delay, Kafka assumed the consumer was dead and revoked its partitions.
To fix it:
Make sure your processing inside poll() isn’t taking too long.
You can either increase max.poll.interval.ms or process records faster before the next poll() call.
I’m using Ubuntu with an RTX 4070, and the only setup that works for me is Python 3.11 with TensorFlow 2.15.1.
This usually happens when the default terminal profile in VS Code is set to null or restricted.
It is explained step-by-step in this short YouTube video here: https://youtu.be/Bh7w9vTVRko
Fix:
Open VS Code Settings (Ctrl + ,)
Search for terminal.integrated.defaultProfile.windows
Choose a terminal like Command Prompt or PowerShell
Restart VS Code
That should solve it!
I think that you are looking for this:
https://mlflow.org/docs/latest/api_reference/_modules/mlflow/tracking/fluent.html#log_table
I followed these instructions and was able to get the production keys for an internal/private app: https://help.railz.ai/en/articles/6395617-steps-to-complete-the-quickbooks-online-app-assessment-questionnaire
A couple of years later... Turns out there are now order and hue_order arguments on the barplot method.
Documentation: https://seaborn.pydata.org/generated/seaborn.barplot.html
for example:
import pandas
import seaborn
values = pandas.DataFrame({'value': [0.3, 0.112, 0.561, 0.235]})
values_sorted = values['value'].sort_values(ascending=False)
seaborn.barplot(
x=values_sorted.index,
y=values_sorted,
order=values_sorted.index,
);
In dev tools, navigator.maxTouchPoints is always set to 1.
Note this might also trigger on older smartphone devices that don't support multiple touch points.
if (navigator.maxTouchPoints === 1) {
  // your code
}
In PhpStorm or WebStorm, the Emmet ! abbreviation for generating an HTML5 boilerplate works only in files recognized as HTML, so it won't automatically expand in a .php file because the editor treats it as PHP code. A few options:
To insert the boilerplate inside a PHP file, place the cursor outside any <?php ?> tags, type !, and run the Emmet "Expand Abbreviation" command manually from View → Context Actions → Emmet → Expand Abbreviation (or press Ctrl+Alt+J / Cmd+Alt+J).
If that doesn't work, you can enable Emmet for PHP under Settings → Editor → Emmet → Enable Emmet for: HTML, PHP, etc.
Alternatively, create a custom Live Template: go to Settings → Editor → Live Templates, add a new template under the PHP context, set an abbreviation such as html5, and paste the standard HTML5 boilerplate code inside. Then, whenever you type html5 and press Tab in a PHP file, PhpStorm will insert the complete HTML5 structure without affecting PHP execution.
That Nucleo board is (or was) totally fine. The mass-storage device, which is enabled by default, exists to load a hex file straight to the connected target without any programming tool, just a simple file transfer (drag and drop, if you will). If it's not needed, you can turn it off by choosing the appropriate options in the firmware upgrade tool. The Boot pin has nothing to do with it. I hope this helps someone.
I have found a method that lets you add event listeners / on-execution logic to commands.
Since VS Code does not allow you to alter existing commands, we have to find a workaround, so this guide can be used for altering any command in VS Code.
VS Code allows you to register an already existing command and will use the last registration. (This could conflict with other extensions doing the same!)
We register an existing command, and on execution we follow an `unregister - execute - register` sequence. The registered command calls the original command, and to avoid falling into a recursive loop we have to follow this sequence.
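The sequence above can be sketched as pseudocode modeled on the vscode.commands API (the command id and handler name are placeholders):

```
// Override an existing command; VS Code uses the last registration.
async function overrideHandler(...args) {
    disposable.dispose()                               // unregister our override
    // ... custom pre-execution logic goes here ...
    await executeCommand("existing.command", ...args)  // the original handler runs
    disposable = registerCommand("existing.command", overrideHandler)  // re-register
}
let disposable = registerCommand("existing.command", overrideHandler)
```

Disposing first means the executeCommand call reaches the original implementation instead of recursing into our handler; re-registering afterwards restores the hook for the next invocation.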
Extending the answer from @simon, I ended up tracking every traceback entry returned by extract_tb().
Package:
from logging import getLogger
from os.path import basename
from traceback import extract_tb
logSys = getLogger()
def catchUnexpectedError(funct):
    def wrapper(*args, **kwargs):
        try:
            return funct(*args, **kwargs)
        except Exception as unknown:
            # Walk the full traceback so every frame that led to the
            # failure gets logged, not just the innermost one.
            stack = extract_tb(unknown.__traceback__)
            exc, msg = repr(unknown).rstrip(')').split('(', maxsplit=1)
            logSys.critical(f'Execution finished by exception {exc}: {msg.strip(chr(34))}.')
            logSys.critical('The following errors led to program failure:')
            for n, tb in enumerate(stack):
                logSys.error(f'{n}) {tb.line} (line {tb.lineno}) at {basename(tb.filename)}')
    return wrapper
Module:
from asset import catchUnexpectedError

def nested():
    raise ValueError("Deliberately raise")

@catchUnexpectedError
def hello():
    nested()

hello()
Output:
[CRITICAL] Execution finished by exception ValueError: 'Deliberately raise'. (11:27:43 29/10/2025)
[CRITICAL] The following errors led to program failure: (11:27:43 29/10/2025)
[ERROR] 0) funct() (line 51) at __init__.py (11:27:43 29/10/2025)
[ERROR] 1) nested() (line 15) at main.py (11:27:43 29/10/2025)
[ERROR] 2) raise ValueError("Deliberately raise") (line 11) at main.py (11:27:43 29/10/2025)
React is an open-source front-end JavaScript library that is used for building user interfaces, especially for single-page applications.
It is used for handling the view layer for web and mobile apps.
React was created to solve a key problem in front-end web development: how to efficiently build and manage dynamic, interactive user interfaces as web applications grow larger and more complex.
I gave up with Conda in the end and have gone for a vanilla Python solution. Pros + cons, but it's been fine for me. Too much of my life wasted trying to fix abstract issues!
If this happened after setting targetSdk to 35 or beyond, then add the following line to your layout.xml:
android:fitsSystemWindows="true"
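For context, the attribute belongs on the root view of the layout; the LinearLayout root here is just an example:

```xml
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true">

    <!-- your content views -->

</LinearLayout>
```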
I've had the same issue. Once I went through the steps in Evan's answer it still didn't work. What fixed it for me was moving the caller repo from internal to private. This can be done under General Settings -> Danger Zone.
After the change, I pushed another commit and it triggered the workflow.
I've faced the same problem and did as you described without any success.
Did you find a solution to this problem?