It looks like you're trying to access the config datastore.
Could you try the following endpoint instead?
GET /rests/data/opendaylight-inventory:nodes?content=nonconfig
Using content=nonconfig will retrieve data from the operational datastore.
You can find more examples in the official ODL documentation: https://docs.opendaylight.org/projects/openflowplugin/en/latest/users/operation.html
For me, this worked:
docker run --name <container_name> -p 27018:27017 <mongodb_image>
(assuming no other process is already listening on port 27018).
Once it is running, open MongoDB Compass and in
Add New Connection use: mongodb://localhost:27018 -> use the first (host) port number you passed to the docker run command.
Yeah, this is the expected behaviour. If you hide/minimise the PiP window, it turns into an audio-only feed and the video feed stops. You can even notice the indicator on the native system toolbar turn from green (camera feed) to orange (mic feed).
Something like this:
// 1. Display a multiple-file upload field
add_action( 'woocommerce_after_order_notes', 'add_custom_checkout_field' );
function add_custom_checkout_field( $checkout ) {
    echo '<div class="woocommerce-additional-fields__field-wrapper">';
    woocommerce_form_field( 'certificate', array(
        'type'     => 'file',
        'class'    => array('form-row-wide'),
        'label'    => __('Files', 'woocommerce'),
        'required' => false,
        'multiple' => 'multiple',
        'name'     => 'certificate[]', // as array
        'accept'   => '.pdf,.doc,.docx,.rtf,.txt',
    ), '' );
    echo '</div>';
}

// 2. Save the uploaded files' URLs and names to order meta
add_action( 'woocommerce_checkout_create_order', 'save_checkout_uploaded_files', 10, 2 );
function save_checkout_uploaded_files( $order, $data ) {
    if ( ! empty( $_FILES['certificate']['name'][0] ) ) {
        $uploaded_files = array();
        foreach ( $_FILES['certificate']['name'] as $key => $value ) {
            if ( $_FILES['certificate']['error'][$key] === UPLOAD_ERR_OK ) {
                $file = array(
                    'name'     => $_FILES['certificate']['name'][$key],
                    'type'     => $_FILES['certificate']['type'][$key],
                    'tmp_name' => $_FILES['certificate']['tmp_name'][$key],
                    'error'    => $_FILES['certificate']['error'][$key],
                    'size'     => $_FILES['certificate']['size'][$key]
                );
                // Handle the upload safely using WP functions
                $upload = wp_handle_upload( $file, array( 'test_form' => false ) );
                if ( ! isset( $upload['error'] ) ) {
                    $uploaded_files[] = array(
                        'file_url'  => $upload['url'],
                        'file_name' => $file['name']
                    );
                }
            }
        }
        if ( ! empty( $uploaded_files ) ) {
            $order->update_meta_data( '_checkout_upload', $uploaded_files );
        }
    }
}

// 3. Helper function to display uploaded files as links
function display_uploaded_files_list( $files ) {
    if ( ! empty( $files ) && is_array( $files ) ) {
        echo '<p>' . __( 'Files Uploaded:', 'woocommerce' ) . '</p><ul>';
        foreach ( $files as $file ) {
            printf( '<li><a href="%s" target="_blank" rel="noopener noreferrer">%s</a></li>', esc_url( $file['file_url'] ), esc_html( $file['file_name'] ) );
        }
        echo '</ul>';
    }
}

// 4. Display uploaded files on the admin order page
add_action( 'woocommerce_admin_order_data_after_billing_address', 'display_uploaded_files_in_admin_orders' );
function display_uploaded_files_in_admin_orders( $order ) {
    $uploaded_files = $order->get_meta( '_checkout_upload' );
    display_uploaded_files_list( $uploaded_files );
}

// 5. Display uploaded files on the thank-you page
add_action( 'woocommerce_order_details_after_order_table', 'display_uploaded_files_in_thankyou' );
function display_uploaded_files_in_thankyou( $order ) {
    $uploaded_files = $order->get_meta( '_checkout_upload' );
    display_uploaded_files_list( $uploaded_files );
}

// 6. Display uploaded files in WooCommerce emails
add_action( 'woocommerce_email_customer_details', 'display_uploaded_files_in_email' );
function display_uploaded_files_in_email( $order ) {
    $uploaded_files = $order->get_meta( '_checkout_upload' );
    display_uploaded_files_list( $uploaded_files );
}
There is a menu option: Tools -> GitHub Copilot.
Try this:
MediaQuery(
  data: MediaQuery.of(context).copyWith(textScaler: TextScaler.noScaling),
  child: child!,
),
It might be due to a wrong declaration; this is a commonly faced issue when doing similar tasks.
I always prefer to use flags:
$flagFirstLine = true;
foreach ($doc->getElementsByTagName('a') as $a) {
    if (!$flagFirstLine) {
        foreach ($a->getElementsByTagName('img') as $img) {
            echo $a->getAttribute('href');
            echo $img->getAttribute('src') . '<br>';
        }
    }
    $flagFirstLine = false; // clear the flag after the first iteration
}
Note two fixes over the naive version: the flag must be set to false (not true) at the end of the loop, or the body never runs; and a DOMElement has no ->src property, so use getAttribute('src').
Or is my whole idea of running entire tests in the EDT doomed from the start?
When it comes to modal dialogs, yes.
How do I assert on Swing modal dialogs?
After asynchronously calling the blocking show method (e.g. with SwingUtilities.invokeLater()), you waitForIdle() outside of the EDT and then assert.
All creation, mutation, and access of Swing components, on the other hand, should be done in the EDT. See this answer for an example.
Figured it out. This permission is managed under System Settings > Privacy & Security > Local Network.

A solution for "Meta XR Simulator window opens and closes immediately" can be found at this link: https://communityforums.atmeta.com/discussions/dev-unity/meta-xr-simulator-closes-immediately-after-launch/1330267
Great questions! You're asking all the right things as someone starting with container orchestration. Let me break this down clearly:
Minikube vs Multi-Node Clusters
You're mostly right - Minikube is designed to run Kubernetes locally on your laptop for learning/development (it can simulate multiple nodes, but only on one machine). For your 4-node Raspberry Pi cluster, Minikube won't work.
For Raspberry Pi clusters, you have better options:
Docker Compose Multi-Host Question
Docker Compose alone cannot manage containers across different hosts. It's designed for single-host deployments. If you want to run containers on multiple Raspberry Pis with Docker Compose, you'd need separate compose files on each Pi - no automatic coordination between them.
For simple cross-host orchestration, you'd need something like Docker Swarm or K3s.
Docker Swarm vs Kubernetes
They're completely independent solutions that solve the same problem:
You pick one or the other, not both.
Kubernetes on Raspberry Pi - Resource Usage
You heard correctly! Full Kubernetes is resource-hungry on Pi. A single-node Kubernetes cluster can easily consume 1GB+ RAM just for the control plane components, leaving little for your actual applications.
This is why I strongly recommend K3s for Raspberry Pi clusters.
K3s - Perfect for Raspberry Pi
K3s is lightweight Kubernetes that's perfect for your use case:
Setup Recommendation for Your 4-Pi Cluster
On the main Pi (server):
curl -sfL https://get.k3s.io | sh -
On each worker Pi (agent), pointing at the server:
curl -sfL https://get.k3s.io | K3S_URL=https://main-pi-ip:6443 K3S_TOKEN=your-token sh -
Alternative: Docker Swarm for Simplicity
If K3s feels too complex initially, Docker Swarm is simpler:
docker swarm init
docker swarm join --token <worker-token> <manager-ip>:2377
Then you can deploy with docker stack deploy using compose files.
My Recommendation Path:
Quick Comparison for Pi Clusters:
| Tool | RAM Usage | Complexity | Pi Suitable |
|---|---|---|---|
| Docker Compose | Low | Very Low | Single Pi only |
| Docker Swarm | Low | Low | ✅ Great |
| K3s | Low | Medium | ✅ Excellent |
| Full Kubernetes | High | High | ❌ Too heavy |
| MicroK8s | Medium | Medium | ✅ Good |
Start with K3s - it's specifically designed for scenarios like yours and will give you the best learning experience without killing your Pi's performance.
For a detailed comparison of lightweight Kubernetes options perfect for Pi clusters, check out: https://toolstac.com/alternatives/kubernetes/lightweight-orchestration-alternatives/lightweight-alternatives
After starting Expo you can press Shift + A to select an Android device or emulator to open (similarly, Shift + I for iOS).
To delete the broken emulator you can use the Android Studio device manager:
Great work, do you have the solution?
Use the Wikimedia REST API ...
"I want to perform report rationalization for our enterprise reporting ecosystem. The environment includes SQL Server tables, stored procedures, SSIS packages, and SSRS reports stored in our code repository. Please analyze all this source code and metadata to:
Extract metadata about data sources, transformations, and reports
Build a data lineage and dependency graph showing how data flows from SQL tables through ETL to reports
Identify reports that are similar or near-duplicate based on query logic, datasets, parameters, and output metrics
Cluster reports by similarity and highlight redundancies
Provide a summary report listing duplicate or overlapping reports with explanation of similarity criteria
Visualize key dataset reuse and report dependency chains
You can treat this as a multi-step task with iterative refinement. Use retrieval-augmented generation techniques to incorporate contextual information from the entire codebase for accurate analysis. Output the findings in a structured format suitable for consumption by business and technical stakeholders."
Here is the link to the Full Blog Post on K-Armed Bandit Problem.
https://dystillvision.com/writing/engineering/multi_k_armed_bandit_problem_in_reinforcement_learning
Python Program for the K-Armed Bandit Problem
import numpy as np

class EpsilonGreedy:
    def __init__(self, k_arms, epsilon):
        self.k_arms = k_arms
        self.epsilon = epsilon
        self.counts = np.zeros(k_arms)  # Number of times each arm was pulled
        self.values = np.zeros(k_arms)  # Estimated value of each arm

    def select_arm(self):
        if np.random.rand() < self.epsilon:
            print("Selecting 1 random arm between 0 and k_arms - 1")
            return np.random.randint(0, self.k_arms)
        else:
            max_value = np.argmax(self.values)
            print("Selecting max-value arm", max_value)
            return max_value

    def update(self, chosen_arm, reward):
        self.counts[chosen_arm] += 1
        c = self.counts[chosen_arm]
        value = self.values[chosen_arm]
        updated_value = ((c - 1) / c) * value + (1 / c) * reward
        self.values[chosen_arm] = updated_value
        # print(chosen_arm, " has been selected ", c, " times")
        # print("Current value for ", chosen_arm, " is ", updated_value)

k_arms = 10      # Ten weapon options
epsilon = 0.1    # Random weapon for 10% of trials
n_trials = 1000

rewards = np.random.randn(k_arms, n_trials)
agent = EpsilonGreedy(k_arms, epsilon)
total_reward = 0

for t in range(n_trials):
    arm = agent.select_arm()
    print(arm)
    reward = rewards[arm, t]
    agent.update(arm, reward)
    total_reward += reward

print("Total Reward ", total_reward)
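Worth noting: the incremental rule in update() — ((c-1)/c) * value + (1/c) * reward — is just the running mean computed without storing past rewards. A quick standalone check (my own sketch, separate from the class above):

```python
# Verify that the incremental update rule reproduces the plain
# average of all rewards observed so far.
rewards = [1.0, 0.0, 0.5, 2.0]
value, c = 0.0, 0
for r in rewards:
    c += 1
    value = ((c - 1) / c) * value + (1 / c) * r
print(value)  # 0.875, i.e. sum(rewards) / len(rewards)
```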
The <ul> tag has a default padding-left of 40px. You can override this with CSS:
ul {
    padding-left: 0;
}
https://developer.mozilla.org/en-US/docs/Learn_web_development/Core/Text_styling/Styling_lists
[columnMode]="'force'"
Replace force with standard or flex.
bodysuit is blacklisted resixpack
Yesterday I spent hours debugging my Flutter app. The .aab looked fine in Android Studio, but both the emulator (16 KB page size) and the Play Console showed errors. The issue is a bug in Android Studio Narwhal 2025.1.2, where AAB alignment isn’t checked properly (APKs are fine).
https://github.com/flutter/flutter/issues/173949#issuecomment-3220455340
https://issuetracker.google.com/issues/432782053?pli=1
The fix: install the RC version (2025.1.3), upgrade the NDK to r28, and update Gradle. I used gradle-8.14.2-all.zip since I couldn’t find the alpha mentioned.
Hope I save you some time if you are in the same situation. 😊
This should be fixed with Version 1356.ve360da_6c523a_
because in Node.js, a timeout of 0 is equal to 0.05
Here's a link to UPS Knowledge Base FAQs (Page 10 - Shipping API - Package - First Question) which clearly indicates that UPS backend systems do not support double-byte (Unicode, UTF-8) characters. Only Latin characters can be entered and submitted.
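If your data may contain accented characters, one pragmatic workaround (my own sketch, not something UPS endorses) is to strip diacritics down to plain ASCII before submitting. Note that letters with no ASCII decomposition (like "ł") are simply dropped:

```python
import unicodedata

def to_ascii(text: str) -> str:
    """Decompose accented characters, then drop anything non-ASCII."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(to_ascii("Müller, São Paulo"))  # -> "Muller, Sao Paulo"
```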
You can move the update logic to a different helper class and use @async to make it non-blocking.
I know it's simple and straightforward, but please let me know what issue you faced here.
Just an extra question: how do I find out which job/program sends data to data queue AAAA in library BBBB using a Db2 query?
BR,
Check your URL rewrites in the web.config file or in IIS Manager (as depicted in the picture below) and make sure the rules are correct. Check out my answer for fixing the WebResource.axd file: https://stackoverflow.com/a/79755304/1704382
This blog post provides a clear explanation. Be sure to check it out!
https://medium.com/@amitdey9020/why-your-javascript-app-slows-down-over-time-memory-leaks-explained-1bb88eb77275
Try adding !important after the display: none rule to ensure the labels stay hidden.
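For example, assuming the labels are targeted by a selector like .my-label (the class name here is hypothetical):

```css
/* !important wins even over later or more specific display rules */
.my-label {
    display: none !important;
}
```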
If you want to avoid appending, you could reduce the reversed list on itself:
l = [1,2,3,4]
lr = Enum.reverse(l)
Enum.reduce(lr, lr, fn x, acc -> [x | acc] end)
This will avoid traversing the list twice (once for reverse, another for appending with ++).
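For intuition, the same prepend-fold can be sketched in Python (simulating Elixir's [x | acc] cons with list concatenation; the result equals l ++ Enum.reverse(l)):

```python
l = [1, 2, 3, 4]
lr = list(reversed(l))   # Enum.reverse(l)
acc = lr
for x in lr:             # Enum.reduce(lr, lr, fn x, acc -> [x | acc] end)
    acc = [x] + acc      # prepend; O(1) in Elixir, unlike ++
print(acc)               # [1, 2, 3, 4, 4, 3, 2, 1]
```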
I had a similar issue; if this pings the OP, please update the secrets and passwords posted above. I'm still trying to resolve the warning:
WRN Error checking new version error="Get \"https://update.traefik.io/repos/traefik/traefik/releases\"
For me (Chilean reasoning here), there is usually no real reason to make a job execute immediately. The first question is: why immediately? Does running the job immediately actually improve any process in your overall business process?
By the way, if you need to call an API immediately, use cURL.
There are many ways to get your "dos de pino y dos de queso" without forcing immediate
execution.
https://dev.to/webdox/corre-tus-tareas-recurrentes-con-sidekiq-3nj1
With SourceTree on macOS: Settings -> Advanced.
First, choose which host's credentials you want to edit:
If you need to change both the username and password, edit the username first; SourceTree will ask for the password when you next interact with the Git repository.
If you only need to change the password, it is better to delete the credential; SourceTree will ask you to enter a new one when you next interact with the Git repository.
Q :
" (...) if I run the zproxy.py and zpub.py scripts on one machine, and zsub.py on another machine (...) never prints anything."
Oh sure, it must do so -- just review the actual, imperatively commanded, connectivity setup :
Host A Host B
+-------+ +-------+
| | zproxy | |
o========.bind("tcp://*:5559") | |
+-->o========.bind("tcp://*:5560") | |
| | | | |
| | | +-->? |
| +-------+ | +-------+
| | zsub
| +---.connect("tcp://{}:5559".format( getNetworkIp() )
| goes Host B ^^^^^^^^^
| onto |||||||||
| self ------------------------------+++++++++
| zpub
+---.connect("tcp://{}:5560".format( getNetworkIp() )
Host A ^^^^^^^^^
Q :
" Can anyone tell me what I'm doing incorrectly? "
(a)
repair the addresses accordingly, so as to indeed .connect() onto an RTO .bind()-prepared AccessPoint(s)
(b)
repair the error-blind distributed code. There are many reasons why naive expectations may and do fail in the real world. Always test the errno-indicated results of operations
(c)
better be nice to resources. There is no control of closing/releasing the instantiated sockets. That is wrong. Always release/dispose of resources, all the more if they will be used many times, as above in an infinite loop.
(d)
last, but not least, your code should fail even on localhost co-located run attempts, as the XPUB/XSUB messages are (by definition, documented in the native API, which the Python wrapper might have put in the shade) multipart. The code as-is shall block infinitely, as even the first arriving message (being by definition multipart) is not fully read out from the incoming queue. One may find, even here on StackOverflow, remarks on "robust" provisioning for surviving unknown-many-parts multipart messages in production-grade apps.
Add USE_EXACT_ALARM (for exact scheduling) and FOREGROUND_SERVICE_DATA_SYNC (if applicable) in your manifest.
Switch from BackgroundJob to WorkManager – it’s the officially supported way for background sync in Android 12+.
For push notifications, ensure you’ve requested POST_NOTIFICATIONS and added a proper NotificationChannel.
On Android 14, background execution logs “not allowed” if you don’t start the task from a foreground service or scheduled WorkManager job.
Use WorkManager (with constraints if needed) instead of BackgroundJob, and if you need long-running tasks, tie them to a foreground service with the right permission in the manifest.
You can use this project to achieve fingerprint spoofing: https://github.com/gospider007/fingerproxy
Agree with @monim's explanation.
The reason this is happening is because setState is asynchronous. So clientSecret may not yet hold a value when stripe.confirmCardPayment was called. This is in line with the React docs here: https://17.reactjs.org/docs/react-component.html#setstate
React does not guarantee that the state changes are applied immediately
...
setState() does not always immediately update the component. It may batch or defer the update until later. This makes reading this.state right after calling setState() a potential pitfall.
Another approach you can consider is to use the useEffect hook to monitor changes to clientSecret. This way, once it has a value (or its value changes), you can call stripe.confirmCardPayment.
const [clientSecret, setClientSecret] = useState('');

useEffect(() => {
    async function confirm() {
        if (stripe && elements) {
            const { error: stripeError } = await stripe.confirmCardPayment(clientSecret, {
                payment_method: {
                    card: elements.getElement(CardElement)
                }
            });
            if (stripeError) {
                console.error('Payment failed', stripeError);
            } else {
                console.log('Payment successful');
            }
        }
    }
    if (clientSecret) {
        confirm();
    }
}, [clientSecret, setClientSecret, stripe, elements]);

const handleSubmit = async (event) => {
    event.preventDefault();
    await createPaymentIntent();
};

const createPaymentIntent = async () => {
    var imageLength = localStorage.getItem("imageBytes").split(',').length;
    await fetch(`${HostName}api/Sell/xxxxx`, {
        method: 'POST',
        headers: {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ "numberOfImages": imageLength })
    })
        .then((response) => response.json())
        .then((response) => {
            setClientSecret(response.clientSecret);
        })
        .catch(error => console.warn(error));
};
You can pass parameters to iframe through the src attribute. Please refer to the following article
https://www.metabase.com/docs/latest/embedding/static-embedding-parameters
Just in case this might be helpful for others: I was having an issue with SQL Developer v24, where it wouldn't let me select a database directly.
To get around it, I typed the connection string in the Hostname field and left Port and Choose Database fields blank.
Answer: PumpFun bonding curves use massive virtual reserves (typically 30,000 SOL and 1+ billion tokens) that dominate the pricing formula until substantial real volume accumulates, making small sells return effectively zero SOL due to mathematical rounding rather than a technical bug.
Resolutions: (1) Build significant volume through multiple large purchases totaling 10-100+ SOL before attempting sells, (2) use a different DEX like Raydium for immediate buy/sell testing, (3) create tokens with lower virtual reserves if using a custom bonding curve implementation, (4) simulate realistic market conditions with multiple wallets making substantial purchases, or (5) accept that PumpFun is designed for tokens that build community volume over time rather than immediate trading functionality.
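To see why tiny sells can floor to zero, here is a toy constant-product calculation with virtual reserves in the ballpark described above (all numbers and the function name are illustrative assumptions, not PumpFun's actual parameters):

```python
# Constant-product curve x * y = k, with integer (lamport / base-unit) math.
SOL_RESERVE_LAMPORTS = 30_000 * 10**9          # ~30,000 virtual SOL
TOKEN_RESERVE_BASE = 1_073_000_000 * 10**6     # ~1.073B tokens at 6 decimals

def sol_out_for_sell(token_base_units: int) -> int:
    # Standard AMM output formula with floor rounding, as on-chain math does.
    return (SOL_RESERVE_LAMPORTS * token_base_units) // (
        TOKEN_RESERVE_BASE + token_base_units
    )

print(sol_out_for_sell(1))      # selling 0.000001 tokens -> 0 lamports (rounds away)
print(sol_out_for_sell(10**6))  # selling 1 whole token -> a few thousand lamports
```

With reserves this large, any sell smaller than a few hundredths of a token rounds down to literally 0 lamports, which matches the "sells return nothing" symptom.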
I have a question, how to launch the website with .env variables? Currently I have them under the build/web/assets folder which will be exposed to the public when I deploy the web app. So, how to deploy the flutter web safely with a .env file without leaking any secrets on a hosting platform?
The "Global solution" portion of this post from Greg Gum (user:425823) helped me with bind:after issue.
Search for ArtistScope to find a variety of copy protection solutions for all types of media and scenarios.
They provide the only solutions that can effectively copy protect web page content.
gcc/g++ is an excellent choice. You may want to add the -std option (e.g. -std=c++20) to get the newer standards.
This error is almost always a KV-cache mismatch (the Cache object introduced in recent versions). During training, you don't need the KV cache at all:
model.config.use_cache = False
You can refer to this project's implementation: https://github.com/gospider007/fp
This example will give the answer:
https://tradingview.github.io/lightweight-charts/tutorials/how_to/series-markers
8 years on, it's about time we got an answer!
I did this using pyftdi and the .exchange method:
from pyftdi.spi import SpiController

address = 'ftdi:///1'
spi = SpiController()
spimode = 0
spi.configure(address)
slave = spi.get_port(cs=0, freq=1e6, mode=spimode)

dup_write = b'\x18\x00'  # sending two bytes; this is what you write to your device
dup_read = slave.exchange(dup_write, duplex=True)
No, CodeBuild will make one API call to retrieve the entire MyDatabaseSecret secret.
You can run this command: virsh vncdisplay $domainName | awk -F: '{print 590 $2}'
to get the port number in 590x format.
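For context: a VNC display :N conventionally listens on TCP port 5900 + N, which is why the awk above prefixes "590". Note that the string concatenation only works for single-digit displays (display :10 is port 5910, not 59010). A small sketch of the safer arithmetic:

```python
def vnc_port(display: int) -> int:
    """Map a VNC display number (the N in ':N') to its TCP port."""
    return 5900 + display

print(vnc_port(0), vnc_port(1), vnc_port(12))  # 5900 5901 5912
```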
Just stumbled across this looking for something else, but as I do a lot of wheels for the radio-controlled models I create, this is how I would create a spoked design.
difference()
{
    union()
    {
        // shaft
        translate([0,0,0]) cylinder(10, 5, 5, $fn=100, center=true);
        // main blades
        dia1 = 4;
        for (blade1 = [0 : 360/10 : 360])
            rotate([90, 0, blade1]) translate([dia1, 0, 0]) cube([30, 5, 1], center=true);
        // blade tips
        dia2 = 19;
        for (blade1 = [6 : 360/10 : 360])
            rotate([90, 0, blade1]) translate([dia2, 0, 0]) rotate([0, 90, 0]) cube([5, 5, 1], center=true);
    }
    // hollow shaft
    translate([0,0,0]) cylinder(12, 3, 3, $fn=100, center=true);
}
The debugfs command from the official extX-fs support package (e2fsprogs) has an rdump command to recursively extract filesystems in userspace.
/usr/sbin/debugfs -R 'rdump / filesystem_extracted' filesystem.img
(it had this since may 2000: https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/commit/debugfs/dump.c?id=2e8d40d562ec93d68505800a46c5b9dcc229264e )
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"
This command creates a temporary network bridge so that Docker Desktop can reach the Minikube registry via 127.0.0.1:5000. Without this, Docker cannot connect to the Minikube port-forward.
When you run the first command, leave that terminal open, start a new one, and run your tag and push commands there. Example:
docker tag my_first_image 127.0.0.1:5000/my_first_image
docker push 127.0.0.1:5000/my_first_image
Turns out I had an environment variable called TF_BUILD which meant my computer had been identified as running on a CI server (even though it wasn't), and CI servers don't support user interactivity.
Removing this environment variable solved the problem.
I'm coming to this 7 years later, so I guess you don't need my answer anymore. However, I struggled with sh's lack of arrays too, so I came up with a somewhat weird idea: why not write functions that replace arrays? I'm bad at describing it in words, so let me show you, and tell me what you think.
Even if it isn't useful for you, I want to believe someone will find it useful. After all, sh is a bit tough...
Before we start,
#! /bin/sh
All the code will run on dash (/ash/sh).
array_length () {
    # The first argument of this function
    # is the array itself, which is actually a string
    # separated by a delimiter that we define in the
    # second argument
    array="${1}"
    # The second field should be the separator.
    # If it's not empty, set it as the local IFS.
    if ! [ "${2}" = "" ]
    then
        local IFS=${2}
    fi
    # So far not bad.
    # Let's count the elements in this array.
    # We'll define the counter:
    counter=0
    # And now let's loop.
    # Pay attention: do not quote the string,
    # as it won't have the "array effect".
    for element in ${array}
    do
        counter=$(( ${counter}+1 ))
    done
    # At the end of the function,
    # let's echo the amount.
    echo "${counter}"
    # Unset the variables (that's what I prefer;
    # you, of course, don't have to)
    unset array element counter
    # And I guess we can exit normally,
    # unless you want to add a specific condition
    # (for example, if the length is zero,
    # the exit code will be bigger than 0).
    # I prefer that it will be 'true' in all cases...
    return 0
}
# Try it!
array_length 'Hello World!'
Now, you could use this idea for other things. For example, you want to find the value at a specific index:
array_value () {
    # Just like before, only a few things are different...
    array="${1}"
    # Now the second argument is the index
    # to extract the value from
    # (think of it as ${array[index]})
    index="${2}"
    # Now we loop through the array
    # (I skipped the IFS thing on purpose;
    # you can copy and include it if you desire)
    # and once we match the index,
    # we return the value.
    # Define a counter for each iteration
    count=0
    # Define the return status
    status=1
    for value in ${array}
    do
        if [ "${index}" -eq "${count}" ]
        then
            # It's a match
            echo "#${index}: '${value}'"
            # Set the return status to zero
            status=0
            # And stop the loop
            break
        fi
        # Increase the counter
        count=$(( ${count}+1 ))
    done
    # Of course you can add a return status;
    # I'll set it from the variable
    return ${status}
}
# Try it!
world=$(array_value "Hello World!" 1)
echo ${world}
My code examples are released under GNU GPLv3 or later.
Tell me what you think about it, and if you like my ideas, feel free to use them!
If there's a better way to achieve these things in sh, feel free to correct me! I'd love to get feedback.
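Since the answer invites alternatives, here is a sketch of the classic built-in trick: POSIX sh does have one "array", the positional parameters. set -- re-splits a string into $1..$n, so length and indexed access come for free:

```shell
#!/bin/sh
array='Hello World!'
# Re-split the string into positional parameters using the default IFS.
# Caution: this overwrites $1, $2, ..., so do it inside a function
# if you still need the script's own arguments.
set -- $array
echo "length: $#"
echo "second element: $2"
```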
The old URL https://vstsagentpackage.azureedge.net/ was retired 1-2 months ago.
Please check this article.
You need to use https://download.agent.dev.azure.com now.
Public Shared Sub SortByPropertyName(propName As String)
    Dim prop = GetType(Person).GetProperty(propName)
    If prop Is Nothing Then Exit Sub
    If OrderAscending Then
        Person.Persons = Person.Persons.OrderBy(Function(x) prop.GetValue(x, Nothing)).ToList()
    Else
        Person.Persons = Person.Persons.OrderByDescending(Function(x) prop.GetValue(x, Nothing)).ToList()
    End If
    OrderAscending = Not OrderAscending
End Sub
Interesting question! I don't think there's a built-in way to reorder automatically.
It turns out that when ActiveRecord decrypts, it passes a different key_provider (the deterministic key), so this works:
ActiveRecord::Encryption::Encryptor.new.decrypt(cipher, key_provider: ActiveRecord::Encryption::DeterministicKeyProvider.new(ActiveRecord::Encryption.config.deterministic_key))
import matplotlib.pyplot as plt

a = [[(i - j) % 2 for i in range(8)] for j in range(8)]
tick_set = [0, 1, 2, 3, 4, 5, 6, 7]
xlab = ["a", "b", "c", "d", "e", "f", "g", "h"]
ylab = ["1", "2", "3", "4", "5", "6", "7", "8"]

plt.imshow(a, cmap='gray', origin="lower")
plt.xticks(ticks=tick_set, labels=xlab)
plt.yticks(ticks=tick_set, labels=ylab)
plt.show()
After lots of random trial and error, I found a solution to this problem.
TLDR: The SQL user we were using in the ADF linked service didn't have the cdc_admin user role. When we enabled it, the queries began working as expected.
More details: My best guess why this happened -- the auto-generated cdc.fn_cdc_get_net_changes_my_custom_table function code contains calls to other auto-generated CDC functions and stored procedures, including ones in master database schema. The SQL user had permissions to call the main function, but not the sub-functions in the master schema. And cdc.fn_cdc_get_net_changes_my_custom_table gave a bad response instead of failing.
What I still don't understand: I don't get why the behavior is different when the query is sent from ADF and SSMS. My SQL user can call the table function fine in SSMS. This issue only happens from ADF.
For me, debuggableVariants was commented out in android/app/build.gradle
react {
// debuggableVariants: ["my variant"]
}
As the comments stated, the fix was changing the name "Content" to "Label" in the dependency property registration.
I could not figure out why Visual Studio is able to draw the XAML preview but it does not work when running the .exe.
I have a similar problem: the Play Console has our app(s) flagged as not compatible with the 16 KB memory page size, but when I run the APK Analyzer or zipalign, neither shows any issue.
Finally being forced to figure out what’s wrong, I discovered buried in the code was a second database context. The secondary database context was including an entity that had a relationship with the entity that was causing problems. The primary database context, had most entities mapped, including all the relationships. The second DB context, however, did not. It was missing related entities, causing the error to be generated.
I tried all the suggestions and none worked; then I tried an original Apple cable and boom!
<video controls autoPlay width="640">
<source src="/video.mkv" type="video/webm" />
Your browser does not support the video tag.
</video>
Example on GitHub:
https://github.com/tinybug-m/simple-dimple-mkv-next-js
It looks like adding is_account_connected in combination with needs_setup did the trick for me. According to this logic here: PaymentGateway.php#L58, both would need to evaluate to true.
Same here, I'm also looking for that solution. The first approach is creating a native module and then integrating it within the project; we need android.view.WindowManager for floating windows.
I can see reflections of fiducials inside other fiducials (e.g. green label 71 in bottom right). You might be detecting the pose of the reflection, intermittently.
android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:2174)
This is because the element you're giving the x:Name property to is not a direct child of the ContentView's main child.
<ContentView>
<Grid>
<Label x:Name="ThisWorks"...
Yeah - since C++11 (I think), if a TU defers dynamic initialization of namespace-scope statics until first odr-use, that initialization is done exactly once in a thread-safe way. If you need hard guarantees, use a function-local static or std::call_once, or make it constinit/constexpr.
What if I have some protected variables and this method is trying to assign start values to those? I am running into errors when I try this.
I removed some binaries which were being picked up by VSCode, restarted the app, and the node version was correct:
rm -rf /usr/local/bin/npm
rm -rf /usr/local/bin/node
Oh, fk me! OpenAI has separate billing for ChatGPT vs their "API".
Makes zero actual sense.
What is the difference between using their API through a "website" or through my own code? Apparently they need two credit cards so you give them extra money, I guess.
Also, I don't understand why addDocuments calls OpenAI in the first place when I'm trying to add docs to my database. LangChain is very unclear about the inner workings of these functions.
from PIL import Image
import numpy as np
# Load the image
image = Image.open("mads2.png").convert("RGBA")
# Convert image to array
data = np.array(image)
# Define white background threshold
r, g, b, a = data[:,:,0], data[:,:,1], data[:,:,2], data[:,:,3]
white_areas = (r > 240) & (g > 240) & (b > 240)
data[white_areas] = [255, 255, 255, 0] # Set white pixels to transparent
# Create new image from modified array
transparent_image = Image.fromarray(data)
# Save the new image
transparent_image.save("connoisseur_logo_transparent.png")
When submitting your iOS app to the App Store, determining if your app is affected by Export Compliance involves checking whether it uses, accesses, or incorporates any encryption, including standard or proprietary algorithms. Apple requires developers to answer specific encryption-related questions during the submission process in App Store Connect.
Key points to consider:
Standard vs. Non-Standard Encryption:
If your app uses standard encryption algorithms such as those provided by Apple or commonly accepted international standards (IEEE, ITU, 3GPP, etc.), you typically select the option indicating use of standard encryption.
If your app uses proprietary or custom encryption algorithms not recognized internationally, you must provide export compliance documentation and may need to obtain licenses or classification codes (e.g., CCATS) from U.S. authorities.
Documentation and Registration:
Apps using non-exempt encryption must declare details about algorithms, encryption scope, and provide necessary documentation during submission.
Registration with the U.S. Bureau of Industry and Security (BIS) might be necessary.
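As a practical note for the common case (this is an assumption about your app, so verify it applies): if your app only uses exempt encryption, for example HTTPS via the standard OS networking APIs, you can answer the export compliance question once in your Info.plist instead of on every submission:

```xml
<!-- Info.plist: declare that the app uses only exempt encryption,
     which skips the export compliance prompt in App Store Connect -->
<key>ITSAppUsesNonExemptEncryption</key>
<false/>
```

If your app does use non-exempt encryption, set the key to `<true/>` and be prepared to supply the documentation described above.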
There are a few properties that you need to set in Apache Camel before you can set the JMS_IBM_MQMD_MsgId header, since this header is a byte[] and JMS does not natively support values other than primitive data types and String.
You can set up a DestinationResolver to enable a few properties on the IBM MQ queue. Since these are not JMS component properties, they have to be set via a custom DestinationResolver.
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;

import com.ibm.mq.jms.MQQueue;
import com.ibm.mq.jms.MQSession;
import org.springframework.jms.support.destination.DestinationResolver;

public DestinationResolver mqmdWriteEnabledWmqDestinationResolver() {
    return new DestinationResolver() {
        @Override
        public Destination resolveDestinationName(
                Session session, String destinationName, boolean pubSubDomain) throws JMSException {
            MQSession wmqSession = (MQSession) session;
            MQQueue queue = (MQQueue) wmqSession.createQueue("queue:///" + destinationName);
            queue.setMQMDWriteEnabled(true);
            return queue;
        }
    };
}
setMQMDWriteEnabled will let you write MQMD headers like JMS_IBM_MQMD_MsgId, which fall outside the JMS specification.
However, do note that by default, destination resolvers are autowired, so you may need to create a new default destination resolver if you are using Camel with Spring Boot.
You can then generate the JMS_IBM_MQMD_MsgId as a byte[] of the message id. Refer to the IBM MQ docs for its MessageId standards.
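The MQMD MsgId is a fixed 24-byte field, so whatever id you generate has to be padded or truncated to that length before it goes into the header. A minimal sketch (the helper name and the sample id are mine, not part of any API):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class MsgIdSketch {
    // MQMD MsgId is a fixed 24-byte field: pad shorter ids with
    // zero bytes, truncate longer ones
    static byte[] toMsgId(String id) {
        byte[] raw = id.getBytes(StandardCharsets.UTF_8);
        return Arrays.copyOf(raw, 24);
    }

    public static void main(String[] args) {
        byte[] msgId = toMsgId("ORDER-12345");
        System.out.println(msgId.length); // always 24
    }
}
```

You would then set this byte[] on the exchange as the JMS_IBM_MQMD_MsgId header before the message reaches the endpoint below.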
Once you do that, this will be the configuration of your to endpoint:
.to(
jms("queue:DEV.QUEUE.1")
.connectionFactory(connectionFactory)
.preserveMessageQos(true)
.advanced()
.destinationResolver(mqmdWriteEnabledWmqDestinationResolver)
.allowAdditionalHeaders("JMS_IBM_MQMD_.*"))
allowAdditionalHeaders will allow non-JMS headers to be sent to the destination.
If both users are taking turns on the same box, you can sometimes sidestep the whole sharing business by having the non-primary users connect using named pipes:
sqlcmd -S "np:\\.\pipe\LOCALDB#somehexcode\tsql\query" -E
I had the same problem and wrote a Prometheus exporter. It tracks task successes, failures, durations, and queue length. https://github.com/danihodovic/celery-exporter
Another solution would be to use the cronet_http [1] package which will honor the installed user certificates.
In my case rather than adding the RuntimeIdentifier, the issue was that the CI/CD pipeline wasn't doing a clean before building so it had old obj folders from previous builds. Turning the clean on first in Azure DevOps was enough to solve it.
https://github.com/vuthaiduy1990/android-wind-library/wiki/Crypto
This library gives you a guide to encrypting/decrypting data between Android and iOS.
It's well-documented and has easy samples.
This library can help you.
https://github.com/vuthaiduy1990/android-wind-library/wiki/Crypto
It supports encryption/decryption with symmetric or asymmetric keys.
For example:
String seed = "v56JBdk75^&*GU156OJ^*(x";
byte[] secretKey = CWCryptoUtils.generateSymmetricKey(seed, 16).getEncoded();
String originText = "Color the wind";
byte[] encrypted = CWCryptoUtils.encrypt(secretKey, CWStreamUtils.stringToBytes(originText));
For Android/Java, use this library
https://github.com/vuthaiduy1990/android-wind-library/wiki/Crypto
For example:
CWCryptoUtils.sha1("Color the wind");
CWCryptoUtils.sha256("Color the wind");
CWCryptoUtils.hash("Color the wind", "SHA-224")
I identified the cause of the Azure Synapse pipeline trigger error. After hours of troubleshooting and adjusting both SFDC and MS Azure Synapse instances, I discovered that an empty space ' ' character caused the issue at the beginning of the SFDC URL endpoint domain. I was trimming and regenerating the application client ID and secret, but I overlooked checking the URL. Once I found and removed the space from the endpoint, the Synapse Trigger ran successfully. My advice to anyone facing the same error is to make sure that the SFDC URL, Client ID, and Client Secret do not contain any extra characters.
Could you go ahead and try to activate the environment, and then use the exit status to create it if necessary?
conda activate zqz
status=$?
# note: in a non-interactive script you may need `conda init`
# (or `source <conda>/etc/profile.d/conda.sh`) before `conda activate` works
if [ "$status" -ne 0 ]; then
    echo "need to create env"
fi
https://github.com/vuthaiduy1990/android-wind-library/wiki/Crypto
Use this library. It supports many hash algorithms.
For example:
CWCryptoUtils.sha1("Color the wind");
CWCryptoUtils.sha256("Color the wind");
CWCryptoUtils.hash("Color the wind", "SHA-224")
9 years later in 2025
add_filter('oembed_result', function ($html, $url, $args) {
if (strstr($html, 'youtube.com/embed/')) {
$html = str_replace('?feature=oembed', '?feature=oembed&enablejsapi=1', $html);
}
return $html;
}, 10, 3);
This works for YouTube videos only.
You can then control the resulting iframes like:
const iframes = document.querySelectorAll("iframe");
iframes.forEach((iframe) => {
    if (iframe.contentWindow && iframe.src.startsWith("https://www.youtube.com")) {
        iframe.contentWindow.postMessage('{"event":"command","func":"stopVideo","args":""}', "*");
    }
});
Sorry for the late reply - I'm guessing you figured it out. However, for others stumbling upon this, it's almost always because VS has forgotten the startup project. Just right-click the game project in Solution Explorer and select "Set as Startup Project".
ed: Typos
Use this library for quick development.
https://github.com/vuthaiduy1990/android-wind-library/wiki/Crypto
This Android library supports some crypto methods and is well-documented.
For example:
String seed = "v56JBdk75^&*GU156OJ^*(x";
byte[] secretKey = CWCryptoUtils.generateSymmetricKey(seed, 16).getEncoded();
String originText = "Color the wind";
byte[] encrypted = CWCryptoUtils.encrypt(secretKey, CWStreamUtils.stringToBytes(originText));
You can try using the bounded method instead of golden.
method="golden" (and I think method="brent" too) takes the bracket as a starting guess and may sample outside it.
method="bounded" enforces the bounds strictly, so all evaluations and the solution stay in [a, b].
#include <stdio.h>

int main(void)
{
    int a = 5, b = 6;
    printf("enter the value of a");
    scanf("%d