gsutil -m cp -R /home/$USER gs://BUCKET_NAME
The -m flag makes gsutil use multiprocessing, which speeds up the copy.
| header 1 | header 2 |
|---|---|
| Start game | moaz watching the film |
| Run with police | win game |
It might be too late, but if you have already set android:inputType="textMultiLine"
and it still doesn't work, try setting android:maxLines
to some number, and it should be fixed.
1. Go to Source Control Settings > Views and check "Repositories" and "Changes".
2. In the "Repositories" list, select all the repos you need via "CTRL" or "SHIFT" and click.
3. Go to Source Control Settings and hide "Repositories".
Finally, you have all the desired repositories in the "Changes" view.
I answer this in depth on my website: https://www5star.health.blog/2025/08/19/aeo-vs-seo-how-to-rank-in-googles-ai-powered-search/
You don’t have to add a network_security_config.xml unless your app needs custom rules (like allowing cleartext HTTP traffic, trusting custom CAs, or disabling certificate pinning for debug builds).
By default, on API 28+ (Android 9 and up), cleartext (HTTP) traffic is blocked unless you explicitly enable it via network_security_config. If your app only uses HTTPS and doesn’t need special exceptions, you’re fine without that file.
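If your app does need an exception, a minimal sketch of such a file might look like this (the debug-host cleartext rule below is purely illustrative, not something your app necessarily needs):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Hypothetical example: allow cleartext HTTP only to the emulator's host machine -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">10.0.2.2</domain>
    </domain-config>
</network-security-config>
```

You would then reference it from the manifest's application element via android:networkSecurityConfig="@xml/network_security_config".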
This usually happens if you have an MSYS2 installation on Win7 and upgrade the packages to a version compatible only with Win10+. The newer OS has additional exports in its kernel and some libraries; those exports were not present on Win7, so the loader will fail.
While the question here is clearly about Cygwin rather than MSYS2, similar solutions may apply: keep your runtime libraries at a version which still supports Win7 if you want to continue using that OS.
Details for Msys2 are here:
Is it possible to install MSYS2 on Windows 7?
You can store the token in Redis or in backend memory; then you only need to store a session ID, which the backend sends to the frontend. That way you don't need to store a large JWT in a cookie or other storage.
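As a rough sketch of the idea (an in-memory dict stands in for Redis here; names like create_session are made up for illustration):

```python
import secrets

# In-memory stand-in for Redis: maps short session IDs to the large JWT.
session_store = {}

def create_session(jwt_token):
    """Store the token server-side and hand the client only a short session ID."""
    session_id = secrets.token_urlsafe(16)
    session_store[session_id] = jwt_token
    return session_id

def resolve_session(session_id):
    """Look the JWT back up on each request; the cookie never carries it."""
    return session_store.get(session_id)

sid = create_session("eyJhbGciOiJIUzI1NiJ9.payload.signature")
print(len(sid))  # the cookie only needs to carry this short ID
```

In a real deployment you would also give each entry a TTL so sessions expire.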
I was referring to the same test example at https://www.jetbrains.com/help/kotlin-multiplatform-dev/compose-test.html#62f6e1cb and facing the same error. As far as I understand, you can run tests for Android only from the terminal: "Currently, you cannot run common Compose Multiplatform tests using android (local) test configurations, so gutter icons in Android Studio, for example, won't be helpful."
Unfortunately I don't have a solution for this, but I'm experiencing the same issue right now and it appears to be a known issue with the API at this time.
https://github.com/LibraryOfCongress/api.congress.gov/issues
Install the latest LTS version of Node.js on your system;
you might be running an older version of Node.
Looks like this issue comes from Astro's internal Vite. In the meantime, you can just run npm update vite
to bump the version manually.
#!/bin/sh
if [ "$var1" != "${var1%mtu *}" ]; then
    echo "Matches, do something"
fi
If the pattern doesn't match, the expansion's value equals the variable itself. If there's a match (with % or #), it returns something different... and we're good to go :)
So, getting the elephant out the way first. "Append" means "Insert at the end [of something]" and "Prepend" means "Insert at the beginning [of something]".
Now that we know the difference between "append" and "prepend", I'd like to mention that <Element>.append()
exists.
And also that the ".appendChild()" is, more specifically: <Node>.appendChild()
.
It makes more sense now to compare <Element>.append()
with <Node>.appendChild()
than <Element>.prepend()
with <Node>.appendChild()
, now that we know the difference between the words "prepend" and "append".
So, first. Note that one is a method of <Element>
whilst the other is a method of <Node>
. If you look at the linked MDN page of <Element>
you'll see that an <Element>
"is" a <Node>
(but not all nodes are elements).
If you're reading this, you probably know what an element is, but what is a node? Well, a simple way to figure that out is to look at the types of nodes that exist. You can also find that information on MDN, but for your convenience, a screenshot has been provided below:
Most notably (or rather, the only other node that I remember) there is the "text node". This is an example of a text node:
I'm sure you've seen this before, it's an example of a child node, that isn't an element.
This distinction is important, since you may be iterating over <Element>.childNodes
, and wondering why all of a sudden ".append()
is not a function" when you've obviously used it a million times. The solution there would be to use <Element>.children
or .appendChild()
.
TL;DR .append()
is better than .appendChild()
in practically every way.
For starters, you can directly append text nodes as strings:
- parent.appendChild('This is some text.'); // Error
+ parent.append('This is some text.'); // Totally fine.
You would need to create a text node:
+ parent.appendChild(document.createTextNode('This is some text.')); // Fine.
Also, .appendChild()
returns the Node. Whereas, .append()
simply returns undefined. Wait, this is actually inconvenient.
But, .appendChild()
is also just much stricter; take a look at the list of throw cases for this exception:
You can also .append()
multiple nodes at once... whereas you can only .append[ASingleNode]Child()
at a time.
Which means this will work:
// Move all pinned tasks to the top of the to-do list.
todoListElem.prepend(...[...todoListElem.children].filter((taskElem) => taskElem.classList.contains('pin')));
Well, it's quite simple. <Node>.appendChild()
is old. It comes from the original, foundational DOM API. And that API was designed to be very precise and low-level, or let's just say, it was a lot more "computer scientist" than "web developer". After all, those were the people who created it. The general idea was "This method will do one thing, and do it explicitly".
That's why .appendChild()
is purposefully so strict.
.append()
, is a modern "addition" - actually, the whole "Element" API is an "addition" that was built "on top" of "Node". So, it contains everything from the past... and more (for backwards-compatibility, ensuring all old websites still work and whatnot).
This newer API, developed by web developers, just looked at how people (and, I guess, themselves) were using the existing methods and made life more convenient for everyone. So now we don't have to manually create a text node... every. single. time. The .append()
does that for us internally.
One thing that is less convenient though, is the return value. If everything is supposed to be easier - why the hell would you return "undefined"? Method-chaining is awesome!
The reason is quite simple. Since you can append multiple elements, what should the return value be?
Okay, the natural conclusion is an array of nodes (including newly created nodes, e.g. String -> TextNode, return the TextNode). But then, if there's one node, do we return just that one node, or an array... with just one node inside of it? There are pros and cons to both (I'd recommend the array).
Another issue is just performance: creating an array every single time you append is too much - maybe not for your web project, but JavaScript (and the DOM API) is built for a wide variety of project types, and for some, performance is important (JS is already slow enough as it is, compared to other languages), and the DOM API is a very critical point. Simply put: undefined
is the cheapest and fastest option.
And finally, you could say it's also to encourage best practices. While method-chaining is really useful sometimes, it can lead to unreadable code. That forced removal makes your code a more straightforward list of executable tasks. Like:
// 1. Create and collect your nodes
const div = document.createElement('div');
const span = document.createElement('span');
const nodesToAdd = [div, span];
// 2. Configure them
nodesToAdd.forEach(node => node.classList.add('new'));
// 3. Perform the DOM operation
parent.append(...nodesToAdd);
// 4. ...
Which is a lot nicer than...
parent.append(document.createElement('div'), document.createElement('span')).forEach(node => node.classList.add('new'));
Albeit, I'd say that a competent developer would learn not to do that anyway, and the programming language shouldn't be the one to enforce best practices. However, I don't think this is actually the case, since modern JS usually does what I prefer: a slight nudge / encouragement. For example, <Document>.getElementById('myElement')
(the old, strict method) heavily encourages you to use only one unique ID in your HTML code, while modern JS gives you the option of document.querySelectorAll('#myElement');
. Although, you could also argue that that is just an "unfortunate" consequence / a "negative" side-effect of using CSS selectors.
Anyways, I went on a little tangent - hope all your questions were answered.
This is called a View Transition.
You have a tutorial here: View Transitions. But be careful, this doesn't work on Firefox yet.
You need a little JavaScript to make the transition between pages.
It is a simple component which provides the feature.
.setBody(simple("resource:classpath:file.txt"))
The file must obviously be present when the route is built, not only at its execution.
I used another package which is this one : https://pub.dev/packages/angur_html_to_pdf
Thanks to everyone who answered. The following code works fine for me:
public static async Task DragAndDrop(ILocator source, ILocator target, IPage page)
{
    var sourceBox = await source.BoundingBoxAsync();
    var targetBox = await target.BoundingBoxAsync();
    await page.Mouse.MoveAsync(sourceBox.X + sourceBox.Width / 2, sourceBox.Y + sourceBox.Height / 2);
    await page.Mouse.DownAsync();
    await page.Mouse.MoveAsync(targetBox.X + targetBox.Width / 2, targetBox.Y + targetBox.Height / 2, new() { Steps = 20 });
    await page.Mouse.UpAsync();
}
Instead of read -s -r -N 1
use read -e -s -r -N 1
. The only addition is the -e
switch, which tells read
to use the readline library for input; only then will read
be able to understand arrow keys and other complex keystrokes.
For those looking for multiple staging areas in Git, git-cl
provides this functionality.
Instead of repeatedly cycling through git add -p
, you can organise changes by intent at the file level:
git cl add bugfixes solver.py utils.py # Bug fixes
git cl add features analysis.py plotting.py # New features
git cl status # See organised changes
git cl commit bugfixes -m "Fix convergence issues"
For scenarios involving many smaller commits from a large changeset, you can organise changes as you work (git cl add hotfix equations.py
) then commit each logical group when ready. This solves the multiple-staging-area problem while working within Git's existing staging model.
I just deleted my android folder and then recreated it using the "flutter create ." command, and it worked for me.
This page from iSH should help. Remember that the popup keyboard has a caret that is the CTRL key, so you can always press ^ and C to stop any active iSH process/program. https://ish.app/?ref=BetaPage
By combining the solutions from
https://stackoverflow.com/a/79700580/22944268
https://stackoverflow.com/a/79738017/22944268
I was able to answer the question "How to pass data from an MCP client to an MCP server in Java with Spring AI?"
I tried this implementation and it worked.
Thank you for everyone's contribution.
But useState doesn't work; it has no effect on the const. What could be the problem?
Please suggest a solution: my Gmail API shows a 500 Internal Server Error when I push my code to production. On localhost it returns a 200 OK status, but when I push my code to the production branch it shows a 500 error. Can someone please help me?
from moviepy.editor import *
# Create a simple solid background (red)
bg_clip = ColorClip(size=(720, 480), color=(200, 50, 50)).set_duration(5)
# Add a simple moving rectangle (just for random fun effect)
rect = ColorClip(size=(200, 100), color=(50, 200, 50)).set_duration(5)
rect = rect.set_position(lambda t: (50 + int(t*100), 200)) # moves horizontally
# Combine background + rectangle
final_clip = CompositeVideoClip([bg_clip, rect])
# Export random video
output_path = "/mnt/data/random_demo.mp4"
final_clip.write_videofile(output_path, fps=24)
The difference comes from how Ruby parses line breaks and arguments inside parentheses.
In your second case:
puts(x
-y)
Ruby doesn't see this as (x-y)
; it actually parses it as (x,-y)
.
To get the expected -1:
puts(x-y)
or
puts(x\
-y)
This isn't a bug, just Ruby's parsing rule.
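For contrast (an aside, not Ruby): Python's implicit line joining inside parentheses merges the lines into a single expression, so the same layout does produce the subtraction:

```python
x = 1
y = 2

# Inside parentheses, Python joins the lines into one expression: x - y
result = (x
- y)
print(result)  # -1
```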
val comicConStuff = context.assets // or just assets if you are in a Context
    .open("CSV files/ComicCon.csv")
    .bufferedReader()
    .use {
        it.readText()
    }
I found out myself, I had to comment out a line from the template:
dependencies {
    testImplementation(libs.junit)
    testImplementation(libs.opentest4j)

    // IntelliJ Platform Gradle Plugin Dependencies Extension - read more: https://plugins.jetbrains.com/docs/intellij/tools-intellij-platform-gradle-plugin-dependencies-extension.html
    intellijPlatform {
        // create(providers.gradleProperty("platformType"), providers.gradleProperty("platformVersion"))

        // Plugin Dependencies. Uses `platformBundledPlugins` property from the gradle.properties file for bundled IntelliJ Platform plugins.
        bundledPlugins(providers.gradleProperty("platformBundledPlugins").map { it.split(',') })

        // Plugin Dependencies. Uses `platformPlugins` property from the gradle.properties file for plugin from JetBrains Marketplace.
        plugins(providers.gradleProperty("platformPlugins").map { it.split(',') })

        // Module Dependencies. Uses `platformBundledModules` property from the gradle.properties file for bundled IntelliJ Platform modules.
        bundledModules(providers.gradleProperty("platformBundledModules").map { it.split(',') })

        testFramework(TestFrameworkType.Platform)
        webstorm("2025.2")
    }
}
Now you can directly use the boxShadow property in React Native:
boxShadow: "0 4px 8px rgba(0, 0, 0, 0.1)",
I know this was posted a while ago; however, I created my own sitemapdotnet library here, which aims to be a replacement for the System.Web sitemap in .NET Framework. I hope this helps!
Everything seems to be fine. And if you receive data on the terminal over UART,
the transmission seems to work. But it looks like the data word from your string is not processed correctly. If you use a function in Microchip Studio
with a pointer, I prefer to keep working with the pointer and not switch to another notation.
Please adapt your Terminal_SendString
function as follows:
void Terminal_SendString(const char* str)
{
    while(*str != '\0')
    {
        USART0_Transmit(*str);
        str++;
    }
}
And check it out in your main:
Terminal_SendString("This is a test\n\r");
Also, it is possible to send some single chars
and check whether they are transmitted correctly:
USART0_Transmit('T');
USART0_Transmit('e');
USART0_Transmit('s');
USART0_Transmit('t');
USART0_Transmit('\n');
USART0_Transmit('\r');
If the single transmission of a character does not work, there may be a misconfiguration of the baud rate in
UBRR
.
Please provide me some feedback in the comments if anything is unclear or does not work.
As of v0.16.22, KaTeX now supports \boxed{}.
Use the tool in the below video :
https://www.youtube.com/watch?v=ssMsU2DFtsk
When tint is on, the text (or icon) is first turned into a mask, then filled with the tint color. The shadow is applied on this tinted version, not on your original text. That’s why it no longer looks pure black.
Opt out of tint for the text (e.g. with foregroundColor(.white)
or symbolRenderingMode(.multicolor)
) so the shadow always stays black.
I have the same issue, were you able to figure it out?
The "AAAA..." pattern indicates you're getting null bytes in your buffer. The issue is that ReadAsync(buffer)
doesn't guarantee reading the entire stream in one call.
Use CopyToAsync()
with a MemoryStream instead:
private async Task HandleFileSelected(InputFileChangeEventArgs e)
{
    var file = e.File;
    fileName = file.Name;
    using var stream = file.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024); // 10 MB limit
    using var memoryStream = new MemoryStream();
    await stream.CopyToAsync(memoryStream);
    var bytes = memoryStream.ToArray();
    base64String = Convert.ToBase64String(bytes);
}
Your application failed to establish a TCP connection with your database. Is the database actually running? Check whether you can connect to the address and port using the telnet command or a database client of your choice.
#include <stdio.h>

int main(void)
{
    printf("Hello World");
    printf("ok");
    getchar();   /* getch() is non-standard (conio.h); getchar() is portable */
    return 0;
}
This is my current approach for the migration,
Hope this can be helpful for someone and feel free to give any suggestions!
hakudevtw/sample_nextjs-i18n-dual-router-migration
# Convert the final artistic logo image to JPG format
from PIL import Image
# Load the artistic PNG
png_path = '/mnt/data/painting_with_roro_rrf_logo_artistic.png'
jpg_path = '/mnt/data/painting_with_roro_rrf_logo_artistic.jpg'
# Open and convert to RGB (JPG doesn't support alpha)
img = Image.open(png_path).convert("RGB")
img.save(jpg_path, "JPEG", quality=95)
jpg_path
No, don't switch databases or trust LLM suggestions for something this complex. Something is incomplete with your Rails upgrade; it's hard to tell what without more specific details. I help companies upgrade Rails apps and would be happy to discuss what may be wrong with your upgrade.
As @furas commented, it is not possible to diagnose why you keep getting stuck in a loop where the user is logged into the account and it tries to redirect to /home without seeing your code sample. But I would like to share a very simplified example of how I was able to implement some security to my website so that only users that are logged in can use certain pages.
https://github.com/code50/112825123/blob/main/cs50x/flask/finance/app.py
I used the @login_required decorator provided by the Flask-Login extension. It is used to protect routes (view functions) in a Flask application, ensuring that only authenticated users can access them.
As you can see, I have applied the @login_required decorator to all the routes except login, logout and register. Every time a user wants to access a protected page, they will be redirected to the login page for authentication, which is rendered by the login view function. Only after a successful login will this person access the home page.
Hopefully, this is helpful.
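To illustrate the pattern without pulling in Flask, here is a hypothetical pure-Python sketch of what a login_required decorator does (Flask-Login's real decorator issues an actual 302 redirect rather than returning a string):

```python
from functools import wraps

# Plain dict standing in for the session object.
session = {}

def login_required(view):
    @wraps(view)
    def wrapped(*args, **kwargs):
        if not session.get("user_id"):
            return "redirect to /login"   # Flask-Login sends a real redirect here
        return view(*args, **kwargs)
    return wrapped

@login_required
def home():
    return "home page"

print(home())             # not logged in yet: "redirect to /login"
session["user_id"] = 1    # what a successful login would record
print(home())             # now: "home page"
```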
I wanted to share my experience with CapriRoutes because it might help anyone looking to set up VoIP or SMS services.
CapriRoutes offers both KYC and non-KYC options, multiple DID numbers for inbound and outbound, and really flexible routing. I initially wondered if KYC verification was necessary, but in my experience, going through it adds trust and reliability, especially if you’re reselling numbers or services to your own customers.
One thing I really appreciate is their API, which lets me integrate and even resell their services directly from my own platform. This makes it easy to offer voice, SMS, and DID management without building everything from scratch.
So yes, it’s not strictly mandatory to complete all verification steps, but the benefits—transparency, reliability, and professional credibility—are definitely worth it. As someone actively using and reselling their services, I can confidently recommend them.
I can understand that the OP wants to test the elements one at a time in the array, then append new elements to the array (to also test) when particular conditions are met.
I have not verified this in JS, but most languages I have used allow you to add new elements to the end of the array at any step of a for loop.
I have used the same process for linearizing data trees into arrays.
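A quick sketch of that pattern in Python (JS behaves the same way, since a for loop re-checks array.length on every iteration):

```python
# Process items one at a time; items meeting a condition enqueue follow-up work.
queue = [1, 2, 3]
i = 0
while i < len(queue):      # len() is re-evaluated, so appended items get visited too
    item = queue[i]
    if item < 10:          # example condition: small items spawn a follow-up
        queue.append(item * 10)
    i += 1
print(queue)  # [1, 2, 3, 10, 20, 30]
```

Just make sure the condition eventually stops producing new items, or the loop never terminates.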
import pandas as pd
import pandas_ta as ta
import math
import matplotlib.pyplot as plt
import numpy as np

# Parameters
length = 14
k = 1.0
method = 'Atr'

# Data
data = pd.read_csv('data.csv')
close = data['Close']
high = data['High']
low = data['Low']
src = close

# --- Pivot Highs / Lows ---
def find_pivot_highs(data, length):
    pivot_highs = []
    for i in range(length, len(data) - length):
        if data[i] > max(data[i-length:i]) and data[i] > max(data[i+1:i+length+1]):
            pivot_highs.append(i)
    return pivot_highs

def find_pivot_lows(data, length):
    pivot_lows = []
    for i in range(length, len(data) - length):
        if data[i] < min(data[i-length:i]) and data[i] < min(data[i+1:i+length+1]):
            pivot_lows.append(i)
    return pivot_lows

ph = find_pivot_highs(high, length)
pl = find_pivot_lows(low, length)

# --- Slope Calculation ---
def calculate_slope(method='Atr', length=length, k=k):
    if method == 'Atr':
        return ta.atr(high, low, close, length) / length * k
    elif method == 'Stdev':
        return ta.stdev(src, length) / length * k
    else:
        # Default fallback if Linreg is not defined
        return pd.Series([0]*len(close), index=close.index)

slope = calculate_slope()

# --- Trendlines ---
slope_ph = [slope[i] if i in ph else 0 for i in range(len(close))]
slope_pl = [slope[i] if i in pl else 0 for i in range(len(close))]
upper = [0]*len(close)
lower = [0]*len(close)
for i in range(len(close)):
    if i in ph:
        upper[i] = src[i]
    elif i > 0:
        upper[i] = upper[i-1] - slope_ph[i]
    if i in pl:
        lower[i] = src[i]
    elif i > 0:
        lower[i] = lower[i-1] + slope_pl[i]

# --- Breakouts ---
upper_breakout = [close[i] > upper[i] for i in range(len(close))]
lower_breakout = [close[i] < lower[i] for i in range(len(close))]

# --- Trading strategy ---
trades = []
trade_type = None
entry_price = None
stop_loss = None
take_profit = None
for i in range(len(close)):
    if trade_type is None:
        if upper_breakout[i]:
            trade_type = 'Long'
            entry_price = close[i]
            stop_loss = entry_price - 0.02*entry_price
            take_profit = entry_price + 0.03*entry_price
        elif lower_breakout[i]:
            trade_type = 'Short'
            entry_price = close[i]
            stop_loss = entry_price + 0.02*entry_price
            take_profit = entry_price - 0.03*entry_price
    else:
        if trade_type == 'Long' and (close[i] <= stop_loss or close[i] >= take_profit):
            trades.append((entry_price, stop_loss, take_profit))
            trade_type = None
        elif trade_type == 'Short' and (close[i] >= stop_loss or close[i] <= take_profit):
            trades.append((entry_price, stop_loss, take_profit))
            trade_type = None

# --- Metrics ---
total_trades = len(trades)
positive_trades = sum(1 for t in trades if t[2] > t[0])
win_rate = positive_trades / total_trades if total_trades > 0 else 0
returns = np.array([(t[2]-t[0])/t[0] for t in trades])
cumulative_returns = returns.sum()
sharpe_ratio = (returns.mean() - 0.01) / (returns.std() + 1e-9) if len(returns) > 1 else 0
sortino_ratio = (returns.mean() - 0.01) / (returns[returns<0].std() + 1e-9) if len(returns[returns<0]) > 0 else 0
profit_factor = sum([t[2]-t[0] for t in trades if t[2]>t[0]]) / max(abs(sum([t[2]-t[0] for t in trades if t[2]<t[0]])), 1e-9)

print(f"Total Trades: {total_trades}")
print(f"Positive Trades: {positive_trades}")
print(f"Win Rate: {win_rate*100:.2f}%")
print(f"Cumulative Returns: {cumulative_returns*100:.2f}%")
print(f"Sharpe Ratio: {sharpe_ratio:.2f}")
print(f"Sortino Ratio: {sortino_ratio:.2f}")
print(f"Profit Factor: {profit_factor:.2f}")

# --- Plot ---
plt.figure(figsize=(12,6))
plt.plot(close, label='Close')
plt.plot(upper, label='Upper Trendline', color='#26a69a')
plt.plot(lower, label='Lower Trendline', color='#ef5350')
for i in range(len(close)):
    if upper_breakout[i]:
        plt.scatter(i, close[i], marker='^', color='r')
    if lower_breakout[i]:
        plt.scatter(i, close[i], marker='v', color='g')
plt.legend()
plt.show()
Have you solved this problem? I got exactly the same singularity warning when performing hmftest.
While Luca C.'s answer is specific on textarea element selection with :placeholder-shown
and jQuery, I want to answer the more specific question Is there a way that I can select a textarea such that $('#id_of_textarea').val() in jQuery will be ''?
combined with the following Is there an attribute for the text in a textarea?
While there is no attribute for the text in a textarea
to select, if you refer to Attribute Selectors you have no other choice but to first add your own data-* attribute to the textarea element...
...but if you instead refer to specifically style the placeholder div element and text, you can simply use the ::placeholder pseudo-element like this:
textarea::placeholder {
/* style properties */
}
Thus, these styling properties will apply only when the textarea has placeholder text and no "value" text.
I want to comment in a PR and trigger a check on that PR (not on master)
I think all the other answers are missing this point. They all test things on your branch as opposed to main but they do not report these tests on the PR without creating a commit.
To report things on your branch you'll need to handle commit creation:
- name: Get PR info and set pending status
  id: pr
  uses: actions/github-script@v7
  with:
    script: |
      const { data: pr } = await github.rest.pulls.get({
        owner: context.repo.owner,
        repo: context.repo.repo,
        pull_number: context.issue.number
      });
      await github.rest.repos.createCommitStatus({
        owner: context.repo.owner,
        repo: context.repo.repo,
        sha: pr.head.sha,
        state: 'pending',
        target_url: `${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
        description: 'Running integration tests...',
        context: 'Integration Tests'
      });
      core.setOutput('head_sha', pr.head.sha);
      core.setOutput('head_ref', pr.head.ref);
and then set the final status:
- name: Set final status
  if: always()
  uses: actions/github-script@v7
  with:
    script: |
      const state = '${{ job.status }}' === 'success' ? 'success' : 'failure';
      await github.rest.repos.createCommitStatus({
        owner: context.repo.owner,
        repo: context.repo.repo,
        sha: '${{ steps.pr.outputs.head_sha }}',
        state: state,
        target_url: `${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
        description: `Integration tests ${state}`,
        context: 'Integration Tests'
      });
I wrote a sample repo for this with the CI workflow:
https://github.com/luccabb/git-ci-pr-comment-automation
You can test it yourself on PR 2: if you comment '/bot tests', it triggers a CI job that fails due to the changes introduced by the PR.
I understand that you have done App Review, but there is a feature called Business Asset User Profile Access. This feature allows you to read information about the user.
In your Meta app developer dashboard, you should navigate to App Review > Permissions and Features, and explicitly search for the Business Asset User Profile Access feature and enable advanced access.
Fellows!
Found the solution. Just had to update my node installation to x64.
Thanks anyway to everyone!
Regarding your third question, if the disk is full requests will simply fail. There doesn't appear to be a way to solve this with the available config knobs, you have to simply enable min_free
on proxy_cache_path
and hope and pray you never get a request flood (and have enough bandwidth to the backend server) to fill your disk before the cache manager kicks in.
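For reference, a hypothetical proxy_cache_path line with min_free set (the path and sizes here are made-up values; adjust them to your setup):

```nginx
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=cache_zone:10m
                 max_size=10g min_free=2g inactive=60m use_temp_path=off;
```

Note that min_free requires nginx 1.19.1 or later.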
function isAllX(string) {
  for (let i = 0; i < string.length; i++) {
    // Reject the first character that is neither lowercase nor uppercase x
    if (string[i] !== 'x' && string[i] !== 'X') {
      return false;
    }
  }
  return true;
}
I’ve faced the same issue when experimenting with secure messaging on EMV cards. From my experience, not every CLA/INS combination supports secure messaging — it usually works only with specific post-issuance commands defined in EMV Book 3. If you try to wrap arbitrary commands (like GET DATA) with CLA=8C
or 84
, most cards will simply return 6E00 (Class not supported).
In short: secure messaging needs a proper TLV structure and is only valid for a limited set of commands; the card has to support it for the specific command, and you can't just "force" it on every action.
Do it after the Gradle project has finished importing once; toggle the pink icon and you're good to go.
The INSERT...RETURNING
clause was added to MariaDB in version 10.5.0, released on December 3, 2019.
Example:
INSERT INTO mytable
(foo, bar)
VALUES
('fooA', 'barA'),
('fooB', 'barB')
RETURNING id;
Flutter uses reg.exe to locate the Windows 10 SDK.
The directory containing reg.exe must be in the PATH environment variable.
I suggest locating reg.exe in the system files and copying it to c:\windows.
Use span links for long running tasks.
Just in case you can't get the code working, here is a formula that will display the last row containing data in column D: =AGGREGATE(14,6,ROW(D:D)/(D:D<>""),1)
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.2.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/15.2.0/react-dom.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/3.5.1/vue.global.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.5.1/knockout-latest.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/7.0.1/d3.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.1.2/angular.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/3.4.4/vue.global.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.6/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.6/umd/react-dom.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/7.6.0/d3.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.7.8/angular.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.5.0/knockout-min.js"></script>
In case anyone else runs into the same issue, here is the workaround I've come up with. Like Nick mentioned, my original timestamp didn't store any time zone information, so I had to use the more general TO_TIMESTAMP()
function, perform the calculation in UTC, and then convert back to Pacific.
SELECT
TO_TIMESTAMP('2025-01-30 23:19:45.000') as ts
,CONVERT_TIMEZONE('UTC', 'America/Los_Angeles', DATEADD(DAY, 90, CONVERT_TIMEZONE('America/Los_Angeles', 'UTC', ts))) as ts_pdt
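The same round-trip can be sketched in Python with zoneinfo, just to make the convert-to-UTC, add, convert-back logic explicit:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

la = ZoneInfo("America/Los_Angeles")

# The stored timestamp, interpreted as Pacific local time
ts = datetime(2025, 1, 30, 23, 19, 45, tzinfo=la)

# Do the 90-day arithmetic in UTC, then view the result in Pacific time again
ts_utc = ts.astimezone(ZoneInfo("UTC"))
ts_pdt = (ts_utc + timedelta(days=90)).astimezone(la)
print(ts_pdt.isoformat())  # 2025-05-01T00:19:45-07:00 (the window crosses the DST switch)
```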
function checkURL(abc) {
  var string = abc.value;
  // Only prepend a scheme when the value doesn't already start with http:// or https://
  if (!/^https?:\/\//.test(string)) {
    string = "http://" + string;
  }
  abc.value = string;
  return abc;
}
<form>
  <input type="url" name="someUrl" onblur="checkURL(this)" />
  <input type="text" />
</form>
Well, it's 2025; the best one out there now is https://changedetection.io/, which is also available as open source if you want to run it yourself.
It supports email, Discord, Rocket.Chat, ntfy, and about 90 other integrations.
It's so customisable that there's not much you can't do with it! Also check out the scheduler, conditional checks, and heaps more features; what's cool is that it's Python-based open source.
you can use
import type { Types } from 'mongoose';
This is completely from a Python neophyte's perspective, but based on discussions with developers regarding other IDEs: functions and libraries are great! They provide functionality via a reference call, reducing the time required to develop the same functionality manually. There is a cost for that convenience, though: you have the memory overhead required for preloading libraries and other add-ons, and then you have reference lag (looking up and loading the function) which you don't have with task-specific code written out longhand (so to speak). With today's processing speeds and I/O capacity, many will pooh-pooh this, but in my discussions with long-term coders in the MS Visual Studio field, the dislike of bloated libraries and DLLs, and the overhead and performance hits endemic to .NET libraries, are just something you have to deal with; otherwise, you have to roll your own leaner, meaner utilities.
I agree that you can't test with a few records and make a broad generalization like you have; even a warm breath from the fan on a resistor could be responsible for your perceived performance inequities. Run the same test against half a million records, then run it again after resequencing your process executions to give each process the opportunity to be first/second/third, then come back with your results.
Personally, my (neophyte) bias tells me you may be right, but my curiosity thinks a better test is in order.
See https://spaces.qualcomm.com/developer/vr-mr-sdk/ — both devices use Qualcomm chips, but the vendors added extra layers that prevent compatibility.
You should not have automatic updates enabled. But I suspect switching to a quality hosting provider would resolve this.
You are probably thinking of memory blocks as similar boxes kept side by side, where to look up the 209th box you would need to count the boxes as you go.
But think of it this way: suppose there are 1024 boxes, each with a number written on the side facing you, and they are arranged around you in a circle in clockwise order. Now, if you are instructed to get the value in the 209th box, what do you do? You know exactly where the 209th box is (at 209/1024*360 degrees clockwise). You turn by that exact amount, see the box, and fetch the value.
Calculating the degrees to turn is a constant-time operation.
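To make the analogy concrete, this is the address arithmetic an array lookup actually performs: one multiply and one add, regardless of the index. A small sketch (the base address and element size below are made-up numbers, purely for illustration):

```python
def element_address(base: int, index: int, element_size: int) -> int:
    """Constant-time address computation: no counting of boxes."""
    return base + index * element_size

base = 0x1000   # assumed start of the array in memory
size = 4        # e.g. a 4-byte int

# The element at index 209 is located with one multiply and one add:
addr = element_address(base, 209, size)
print(hex(addr))  # 0x1344
```

The cost of this computation does not depend on the index, which is why array indexing is O(1).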
Can we improve search results over time, i.e., make the scoring profile dynamic based on user feedback?
Yes. In your settings, change workbench.editor.navigationScope to one of:
- default (the behavior you see now)
- editorGroup (open tabs only)
- editor (only the currently selected tab)
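As a concrete example, the setting can be changed directly in settings.json (the value shown here is just one of the three options; pick whichever scope you want):

```json
{
  "workbench.editor.navigationScope": "editorGroup"
}
```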
I'm in a similar situation, was this ever resolved?
This issue was resolved here: https://devzone.nordicsemi.com/f/nordic-q-a/123400/zephyr-sd-card-remount-issue-fs_unmount-vs-disk-deinitialization-leading-to-eio-or-blocked-workqueue
I was able to solve it with these steps:
1. Did not use either of the following:
disk_access_ioctl("SD", DISK_IOCTL_CTRL_INIT, NULL);
disk_access_ioctl("SD", DISK_IOCTL_CTRL_DEINIT, NULL);
Earlier I would init the disk, mount, (do stuff), and then on pin-triggered removal of the SD card, unmount and deinit. It seems I need to either remove the init/deinit calls altogether, or deinit right after init if I need to access any parameters via the disk_access_ioctl command.
2. Even with the above, for some reason everything would still get blocked at unmount. This was resolved once I moved to a lower-priority workqueue; I was using the system workqueue before, and it would block forever.
Simply use sorted():
sorted_list = sorted(c)
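A couple of points worth noting: sorted() returns a new list and leaves the original untouched, works on any iterable, and accepts key and reverse arguments. A quick sketch (the list c is just an assumed example):

```python
c = [3, 1, 2]

sorted_list = sorted(c)   # new list; c is left unchanged
print(sorted_list)        # [1, 2, 3]
print(c)                  # [3, 1, 2]

# sorted() also works on tuples, sets, dict keys, etc.,
# and takes key= and reverse= arguments:
print(sorted(c, reverse=True))                       # [3, 2, 1]
print(sorted(["banana", "Apple"], key=str.lower))    # ['Apple', 'banana']
```

If you want to sort a list in place instead, use c.sort().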
@font-face {
font-family: 'Tangerine';
font-style: normal;
font-weight: normal;
src: local('Tangerine'), url('http://example.com/tangerine.ttf') format('truetype');
}
body {
font-family: 'Tangerine', serif;
font-size: 48px;
}
Credits to https://github.com/apache/airflow/discussions/26979#discussioncomment-13765204
The trick is to add environment variables with the env: attribute
env:
  - name: AIRFLOW__LOGGING__REMOTE_LOGGING
    value: "True"
  - name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
    value: "s3://<bucket-name>"
  - name: AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID
    value: "minio"
  - name: AIRFLOW_CONN_MINIO
    value: |
      {
        "conn_type": "aws",
        "login": "<username>",
        "password": "<password>",
        "extra": {
          "region_name": "<region>",
          "endpoint_url": "<endpoint_url>"
        }
      }
The connection is still not detected in UI or CLI (in line with what @Akshay said in the comments), but logging works for sure!
first_value = df.select('ID').limit(1).collect()[0][0]
print(first_value)
Process Monitor may provide some clue as to which file ClickOnce is seeking:
https://learn.microsoft.com/en-us/sysinternals/downloads/procmon
Your Dockerfile needs to install keyrings.google-artifactregistry-auth
to authenticate to Artifact Registry. Modify your Dockerfile like this:
FROM python:3.12-slim
RUN apt-get update && apt-get install -y --no-install-recommends git openssh-client && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip install keyrings.google-artifactregistry-auth
RUN pip install --extra-index-url https://us-central1-python.pkg.dev/<projectid>/<pypiregistry>/simple/ my-backend==0.1.3 && pip install gunicorn
CMD ["gunicorn", "my_backend.app:app"]
This will then let the pip command find credentials to use. Make sure to set up proper authentication in your GitHub Actions workflow so the required credentials are available. You can refer to this documentation about configuring authentication to Artifact Registry.
TypeScript 5.6 added --noCheck.
noCheck - Disable full type checking (only critical parse and emit errors will be reported).
This leaves tsc running as just a type stripper and transpiler, similar to using esbuild to strip types, except you get better declaration outputs (and slower transpile times).
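If you'd rather configure this than pass a CLI flag, noCheck is also available as a tsconfig.json compiler option in TypeScript 5.6+; a minimal sketch (the declaration flag here is just to illustrate the emit-only workflow):

```json
{
  "compilerOptions": {
    "noCheck": true,
    "declaration": true
  }
}
```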
In case you are trying to navigate between differences within a Repository Diff, for Next Difference press F7 and for Previous Difference press Shift+F7
I cannot believe how easy the solution was... and I can't believe what I had to do to figure it out. I compiled the usrsctp library in Visual Studio and statically linked to it with debug symbols so I could step through the code from my program. Usrsctp is incredibly complex, and I stepped through thousands of lines of code until I found the line that was sending the retransmission. It turns out it wasn't any specific retransmission code, it was just the normal send call, but it was returning an error. I looked through the documentation but couldn't find an error code that made any sense. Then I thought about it for a while and realized that the error code seemed to be the same as the number of bytes returned from the socket sendto() function. Yeah, I was returning the byte count, which usrsctp interpreted as an error code, so it kept resending the data!
I simply had to return 0 in the onSendSctpData() function and it stopped retransmitting!!
How am I able to get into my device and the WiFi /Bluetooth settings apps to be able to connect Bluetooth speakers and switch my WiFi to data when I need to use
Most likely, if you have just installed a new IDE and you are coming from VS Code with the auto-save feature enabled, you might have forgotten to save the file or missed adding the main() function.
We can get the file root path after deployment in an Azure Function using the ExecutionContext executionContext object:
public async Task<IActionResult> GetFiles(
    [HttpTrigger(AuthorizationLevel.Function, nameof(HttpMethod.Get), Route = "Files/GetFilePath")] FilePathRequest request,
    ExecutionContext executionContext)
{
    try
    {
        return await _bundleOrchestrator.GetFileData(request, executionContext.FunctionAppDirectory);
    }
    catch (F9ApiException ex)
    {
        return new BadRequestErrorMessageResult(ex.ExceptionMessage) { StatusCode = ex.SourceStatusCode };
    }
}
public async Task<string> GetFileData(FilePathRequest request, string rootPath)
{
    try
    {
        // Construct the path to the configuration folder and file
        string configFolder = "Configuration"; // Adjust as needed
        string configFileName = "NCPMobile_BundleConfig.json"; // Adjust as needed
        string filePath = Path.Combine(rootPath, configFolder, configFileName);

        // Check if the configuration file exists
        if (!File.Exists(filePath))
        {
            throw new FileNotFoundException($"Configuration file not found at: {filePath}");
        }

        // Define JSON serializer settings
        var jsonSettings = new Newtonsoft.Json.JsonSerializerSettings
        {
            MissingMemberHandling = Newtonsoft.Json.MissingMemberHandling.Ignore,
            NullValueHandling = Newtonsoft.Json.NullValueHandling.Ignore,
            MetadataPropertyHandling = Newtonsoft.Json.MetadataPropertyHandling.Ignore
        };

        // Read the JSON content asynchronously
        string jsonBundlesData = await File.ReadAllTextAsync(filePath);
        return jsonBundlesData; // Sample response
        // Proceed with processing jsonBundlesData as needed
    }
    catch (Exception ex)
    {
        // Handle exceptions appropriately
        throw new ApplicationException("Error occurred while retrieving bundle configuration.", ex);
    }
}
To save the photo path in the database, after capturing the photo with MediaPicker, use photo.FullPath to get the local file path. Store this string in a property bound to your ViewModel (e.g., PhotoPath). Then, in your AddAsync command, assign this path to the Photoprofile field and save the entity using SaveChanges(). Ensure Photoprofile is of type string.
The statement from the author is right. The _id here is not a compound index; it's a mere exact-match index on the whole value.
The highly voted answer is misleading: it talks about the right things without addressing the original question.
_id: {
entityAId,
entityBId
}
To be able to query on entityAId, or to query and sort on entityAId and entityBId, you'll need to create a compound index on _id.entityAId and _id.entityBId.
app.get('/{*any}', (req, res) =>
This works for me.
For me in Eclipse I had to enable it in project settings under Java Compiler -> Annotation Processing -> Enable annotation processing:
SELECT SCHEMA_NAME, CREATE_TIME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = 'your_database_name';
Your code is using Angular 19+ APIs, but your app is on Angular 17.
RenderMode and ServerRoute (from @angular/ssr) were introduced with Angular's hybrid rendering / route-level render modes in v19. They do not exist in v17, so VS Code correctly reports "no exported member".
How to fix this:
Upgrade to Angular 19+ (the CLI and framework versions must match).
Verify @angular/ssr is also v19+ in package.json.
After updating, your imports will be valid.
If the editor still underlines the types, restart the TS server in VS Code (Command Palette -> "Developer: Restart TypeScript Server").
If you don't want to upgrade now, remove those imports and use the legacy SSR pattern on v17.
While a Newton solver with solve_subsystems=False is truly monolithic, I wouldn't describe the solve_subsystems=True case as hierarchical. Even though the inner subsystems are solved first, the outer Newton solver still acts on the full residual vector of its group, including both the inner subsystem residuals _and_ any coupling between inner and outer subsystems. That's why the implicit component's residual is being driven to zero at each iteration. The solve_subsystems option helps the outer solver by solving a smaller chunk of the residual first, at some computational expense. In either case, the outer solver is always trying to solve everything below it.
Diving into the OpenMDAO internals a bit...
In OpenMDAO, everything is really implicit. You can think of explicit components as a special case of implicit components. The residual is the difference between the value in that component's output vector and the value that compute produces based on the inputs. In the case of a feed-forward system, the explicit component's compute method effectively "solves itself", driving that residual to zero.
If there's feedback into that explicit component, the system's residual vector will show some nonzero residual for that component's outputs. A Nonlinear Block Gauss-Seidel solver can resolve this residual just by repeatedly executing the system until the residual is driven to zero (assuming that architecture works). Alternatively, the Newton solver just sees it as another residual to be solved.
Do you have an XDSM diagram of your system? That might make it easier to understand the behavior of your model.
# Project setup
mkdir my-gaming-app && cd my-gaming-app
# Frontend
npx create-react-app client
cd client
npm install tailwindcss lucide-react
npx tailwindcss init
cd ..
# Backend
mkdir server && cd server
npm init -y
npm install express cors nodemon
cd ..
I use https://onlinetools.ups.com/api/rating/v1/shop
Returns several rates at the same time.
<?php
/**
* Requires libcurl
*/
$curl = curl_init();
//Receive package info from query
$Weight = $_POST['Weight'];
$ReceiverZip = $_POST['Zip'];
//Set Receiver country
$ReceiverCountry = "US";
//Set your info
$UPSID = "YOUR UPS ACCOUNT NUMBER";
$ShipperName = "YOUR NAME";
$ShipperCity = "YOUR CITY";
$ShipperState = "YOUR STATE ABBREVIATION";
$ShipperZip = "YOUR ZIP";
$ShipperCountry = "US";
$clientId = "YOUR API CLIENT ID";
$clientSecret = "YOUR API CLIENT SECRET";
// Step 1: access token
curl_setopt_array($curl, [
CURLOPT_HTTPHEADER => [
"Content-Type: application/x-www-form-urlencoded",
"x-merchant-id: ".$UPSID,
"Authorization: Basic " . base64_encode("$clientId:$clientSecret")
],
CURLOPT_POSTFIELDS => "grant_type=client_credentials",
CURLOPT_URL => "https://onlinetools.ups.com/security/v1/oauth/token",
CURLOPT_RETURNTRANSFER => true,
CURLOPT_CUSTOMREQUEST => "POST",
]);
$response0 = curl_exec($curl);
$error = curl_error($curl);
curl_close($curl);
if ($error) {
echo "cURL Error #:" . $error;
} else {
$tokenData = json_decode($response0);
$accessToken = $tokenData->access_token;
}
// Step 2: shipment data
$payload = array(
"RateRequest" => array(
"Request" => array(
"TransactionReference" => array(
"CustomerContext" => "CustomerContext"
)
),
"Shipment" => array(
"Shipper" => array(
"Name" => $ShipperName,
"ShipperNumber" => $UPSID,
"Address" => array(
"AddressLine" => array(
"ShipperAddressLine",
"ShipperAddressLine",
"ShipperAddressLine"
),
"City" => $ShipperCity,
"StateProvinceCode" => $ShipperState,
"PostalCode" => $ShipperZip,
"CountryCode" => $ShipperCountry
)
),
"ShipTo" => array(
"Name" => "ShipToName",
"Address" => array(
"AddressLine" => array(
"ShipToAddressLine",
"ShipToAddressLine",
"ShipToAddressLine"
),
"PostalCode" => $ReceiverZip,
"CountryCode" => $ReceiverCountry
)
),
"ShipFrom" => array(
"Name" => "ShipFromName",
"Address" => array(
"AddressLine" => array(
"ShipFromAddressLine",
"ShipFromAddressLine",
"ShipFromAddressLine"
),
"City" => $ShipperCity,
"StateProvinceCode" => $ShipperState,
"PostalCode" => $ShipperZip,
"CountryCode" => $ShipperCountry
)
),
"PaymentDetails" => array(
"ShipmentCharge" => array(
array(
"Type" => "01",
"BillShipper" => array(
"AccountNumber" => $UPSID
)
)
)
),
"NumOfPieces" => "1",
"Package" => array(
"PackagingType" => array(
"Code" => "02",
"Description" => "Packaging"
),
"PackageWeight" => array(
"UnitOfMeasurement" => array(
"Code" => "LBS",
"Description" => "Pounds"
),
"Weight" => $Weight
)
)
)
)
);
//Rate shop
$curl = curl_init(); // re-initialize: the previous handle was closed after the token request
curl_setopt_array($curl, [
CURLOPT_HTTPHEADER => [
"Authorization: Bearer " . $accessToken,
"transId: string",
"transactionSrc: testing"
],
CURLOPT_POSTFIELDS => json_encode($payload),
CURLOPT_URL => "https://onlinetools.ups.com/api/rating/v1/shop",
CURLOPT_RETURNTRANSFER => true,
CURLOPT_CUSTOMREQUEST => "POST",
]);
$response = curl_exec($curl);
$error = curl_error($curl);
curl_close($curl);
if ($error) {
echo "cURL Error #:" . $error;
} else {
$decodedResponse = json_decode($response, true); // true for associative array
// Example using associative array
if (isset($decodedResponse['RateResponse']['RatedShipment'])) {
foreach ($decodedResponse['RateResponse']['RatedShipment'] as $shipment) {
$serviceCode = $shipment['Service']['Code'];
$rate = $shipment['TotalCharges']['MonetaryValue'];
switch ($serviceCode) {
case "01":
$ups_cost01 = $rate;
break;
case "02":
$ups_cost02 = $rate;
break;
case "03":
$ups_cost = $rate;
break;
case "12":
$ups_cost12 = $rate;
break;
default:
break;
}
}
}
}
?>
It would appear that this behavior is simply barred from working in captive portals as a security precaution. No files can be downloaded from a captive portal to protect the device integrity. So what I'm trying to do is impossible, as far as I can tell.
The intermittent failures are happening because of build context and file path mismatches in your monorepo. Docker only sees files inside the defined build context, and your Dockerfiles are trying to COPY files that sometimes aren't in the place Docker expects.
For me, it's not working for an element of a dict whose type() reports as <class 'datetime.datetime'>; it reports both the type and the value as null in the diff output.
I think the error message about literal_eval_extended is referring to the helper.py module that is part of the deepdiff package (is "package" the right term?).
I found the source at:
https://github.com/seperman/deepdiff/blob/master/deepdiff/helper.py
But the code refers to an undefined global called LITERAL_EVAL_PRE_PROCESS. I don't have the expertise to understand what this means, and it's not obvious how to specify an option to fix this.
The weird thing is, the code at:
Does specify datetime.datetime as one of the things to include. Oh well.
How about using org.springframework.boot.test.web.client.TestRestTemplate instead of org.springframework.boot.web.server.test.client.TestRestTemplate?
In Spring Boot's documentation, TestRestTemplate is declared in the package org.springframework.boot.test.web.client.
IMO, the convenience benefit of the builder pattern doesn't make up for the strictness you lose when instantiating the entity. Entities usually have column rules like "nullable = false" which means you are mandated to pass it when instantiating. There are other workarounds to mandate parameters in the builder pattern, but do you really want to go through all that trouble for all of your entities?