The documentation says this:
Bind a named statement parameter for ":x" placeholder resolution, with each "x" name matching a ":x" placeholder in the SQL statement.
Although you could infer otherwise, testing suggests that it indeed binds multiple placeholders that share a name.
The query in the sample situation would end up like this:
SELECT * FROM table WHERE colA = 'bar' OR colB = 'bar'
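For illustration, here is a minimal sketch of that behavior using Spring's NamedParameterJdbcTemplate (an assumption on my part - the same single-name, multi-placeholder resolution applies across Spring's named-parameter binding APIs):
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

public class NamedParamDemo {
    private final NamedParameterJdbcTemplate jdbcTemplate; // assumed wired elsewhere

    public NamedParamDemo(NamedParameterJdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Map<String, Object>> findBar() {
        // One name bound once; both ":x" placeholders resolve to 'bar'
        MapSqlParameterSource params = new MapSqlParameterSource().addValue("x", "bar");
        return jdbcTemplate.queryForList(
                "SELECT * FROM table WHERE colA = :x OR colB = :x", params);
    }
}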
Option 1:
Run the command prompt as an administrator, then
php artisan storage:link
Option 2: Run the command prompt as an administrator, then
mklink /D "C:\path\to\your\project\public\storage" "C:\path\to\your\project\storage\app\public"
The one-liner by @user7343148 worked really nicely from the command-line, but I had some trouble figuring out a way to make an alias for it and add it to zshrc. So, putting it here just in case someone needs it.
mp3len() {
mp3info -p '%S\n' *.mp3 | awk '{s+=$1} END {printf "%d:%02d:%02d\n", s/3600, (s%3600)/60, s%60}'
}
Try pressing Ctrl+Shift+E - it should restore the Explorer window. Then you can drag it back to the Activity Bar.
Same issue here. Can you help me if you found a solution, please?
Exporting the CSV file using the encoding format UTF-8 resolved the issue for me.
The main problem is that admin_finish, the only route you have defined, returns JSON data directly. When you visit that URL in your browser, Laravel duly runs the index method and serves the raw JSON response. The Blade file containing your HTML table and the JavaScript required to populate it is never loaded by your browser.
You need two different routes for the two distinct jobs:
One route to show the HTML page.
One route (an API endpoint) for your JavaScript to call to retrieve the data, as in the sketch below.
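A minimal sketch of that split (the controller and paths here are hypothetical):
// routes/web.php
use App\Http\Controllers\AdminController;
use Illuminate\Support\Facades\Route;

// Route 1: serves the Blade view containing the empty table and your JavaScript
Route::get('/admin/finish', [AdminController::class, 'page']);

// Route 2: the API endpoint the JavaScript fetches its JSON data from
Route::get('/api/admin/finish', [AdminController::class, 'index']);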
Perfect! Let's move forward with all of it. Here's what I’ll provide now based on your direction:
MainActivity.java
Insert this at the end of onCreate():
songListView.setOnItemClickListener((parent, view, position, id) -> {
String song = (String) parent.getItemAtPosition(position);
Intent intent = new Intent(MainActivity.this, PlaySongActivity.class);
intent.putExtra("songTitle", song);
startActivity(intent);
});
PlaySongActivity.java
package com.example.rpsong;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.view.View;
import android.widget.*;
import androidx.appcompat.app.AppCompatActivity;
public class PlaySongActivity extends AppCompatActivity {
TextView songTitleText;
Button btnPlayPause;
MediaPlayer mediaPlayer;
boolean isPlaying = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_play_song);
songTitleText = findViewById(R.id.songTitleText);
btnPlayPause = findViewById(R.id.btnPlayPause);
String songTitle = getIntent().getStringExtra("songTitle");
songTitleText.setText(songTitle);
int resId = getSongResourceId(songTitle); // Match song title to R.raw.<file>
if (resId != 0) {
mediaPlayer = MediaPlayer.create(this, resId);
} else {
Toast.makeText(this, "Audio file not found", Toast.LENGTH_SHORT).show();
}
btnPlayPause.setOnClickListener(v -> {
if (mediaPlayer == null) return;
if (isPlaying) {
mediaPlayer.pause();
btnPlayPause.setText("Play");
} else {
mediaPlayer.start();
btnPlayPause.setText("Pause");
}
isPlaying = !isPlaying;
});
}
private int getSongResourceId(String songTitle) {
songTitle = songTitle.toLowerCase().replace(" ", "_"); // "Tum Hi Ho" → "tum_hi_ho"
return getResources().getIdentifier(songTitle, "raw", getPackageName());
}
@Override
protected void onDestroy() {
if (mediaPlayer != null) {
mediaPlayer.release();
}
super.onDestroy();
}
}
activity_play_song.xml
Create this file as res/layout/activity_play_song.xml:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:padding="24dp"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center">
<TextView
android:id="@+id/songTitleText"
android:text="Now Playing"
android:textSize="22sp"
android:layout_marginBottom="24dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<Button
android:id="@+id/btnPlayPause"
android:text="Play"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
</LinearLayout>
/res/raw
Place your MP3 files in app/src/main/res/raw/ and name them like this:
Song Title | File Name (in raw/)
---|---
Tum Hi Ho | tum_hi_ho.mp3
Kesariya | kesariya.mp3
Perfect | perfect.mp3
Shape of You | shape_of_you.mp3
File names must be lowercase and underscored (no spaces/symbols).
[x] Spinner works with SharedPreferences ✅
[x] Song list updates by country ✅
[x] Tapping a song opens PlaySongActivity ✅
[x] Playback with MediaPlayer ✅
If you want the full Android Studio project with this code, I can generate it as a .zip – just confirm and I'll prepare it.
Let me know if you'd like to:
Add a SeekBar 🎚️
Add Premium 🔒 icons for special songs
Add user profile photo/settings logic
Ready when you are!
MAIL_MAILER=smtp
MAIL_HOST=mailhog
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS="[email protected]"
MAIL_FROM_NAME="${APP_NAME}"
We update things like the SMTP host, email, password, and port in the .env file, which can be found in the project root.
import random
from datetime import datetime, timedelta

def _generate_random_date(since: datetime, until: datetime):
delta = until - since
random_seconds = random.uniform(0, delta.total_seconds())
return since + timedelta(seconds=random_seconds)
When using git shortlog -sn without the email option, it counts all commits by author name. But when you add the -e option, it distinguishes first by name and then by email, creating two or more entries in the log if the same author published commits using different emails.
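If you would rather see those entries merged, git's standard .mailmap file maps the alternate emails back to one canonical identity (the names and addresses below are examples):
# .mailmap at the repository root
Jane Doe <jane@work.example> <jane@home.example>
git shortlog -sne consults .mailmap by default, so all of those commits are then counted under the canonical name/email pair.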
Best answer: call your ISP, or whoever hosts your DNS records, and request a change to the PTR record for your public IP address. You have to add a name, for example: mail.mydomainname.com.
After that, add the same name as the PTR record for your domain inside the domain setup section in MDaemon (host name section, SMTP server name).
Wait a day for DNS replication. Done.
Just to let you all know: the solution was to clear the cookies related to GitHub, to be able to use the other authentication options.
I have a similar issue.
ILNodeControlStop("Firewall");
And feedback:
[*] [System] ILNodeControlStop: Node Firewall does not exist.
I would suggest adding your own column to the dataset through a report extension, and replacing the Document Terms on the report layout (RDL file in Report Builder).
Yun Zhu has a blog that might be a good starting place for you as well. https://yzhums.com/1958/
If you need to use nativewind for the expo camera, I could only make it work like this:
import { CameraType, CameraView, useCameraPermissions } from 'expo-camera';
import { cssInterop } from "nativewind";
cssInterop(CameraView, { className: "style" });
export default function Camera() {
...
}
Anything with sys-* is a Google Apps Script in GCP.
If your tailwind.config.js is normal, you can try:
1. Delete folder .next
2. npm run dev
You can do it here: tinyurl.com/imagexor
Take a look at sqlmodel.tiangolo.com/
from PIL import Image
# Open the extracted frame
img = Image.open(frame_image_path)
# Format for iPhone wallpaper: 1170x2532 (portrait)
iphone_size = (1170, 2532)
formatted_img = img.copy()
formatted_img = formatted_img.resize(iphone_size, Image.LANCZOS)
# Save as a new jpg
iphone_img_path = "/mnt/data/frame_com_rosto_iphone.jpg"
formatted_img.save(iphone_img_path, "JPEG")
iphone_img_path
This took me a while to find.
My Amplify build/deployments were failing with the error message "Unable to assume specified IAM Role". The issue was the AWS Amplify Github App lost access to my Amplify Project's GitHub Repository.
Fix: In your GitHub Org, go to Settings > GitHub Apps > AWS Amplify and choose Configure. Review the settings in the section Repository Access. In my case, I had to select the GitHub repository.
Today when I tried to resize the window, I found that it couldn't be resized freely like other apps (reducing width or height); I could only make it smaller while keeping the same ratio. It turns out it is the UIRequiresFullScreen key (the second one in my Info.plist) that forces the window to be fully displayed, or at least to keep the default ratio. Removing it or changing it to NO/false solves the issue.
Nesting should work. Please post code as text and not as images. What did your nest version look like?
It should look similar to:
Sort(
Filter('Positions', etc...),
Title
)
Turns out when I added the NumberRandomizer class as an autoload in the Godot project settings, I added the .cs file and not the .tscn file. Switching the autoload to the .tscn file fixed the issue for me.
The answer is this instruction:
this.world.getDispatchInfo().m_allowedCcdPenetration = 0.0001;
For question 1, I confirm that as at 2025 (Windows 10 and 11) the Registry continues to hold the list of time zone IDs in HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Time Zones
These time zone IDs are unique strings to identify the time zone and are not for display to the user. The strings to display to the user can be found in the "Display" subkey (these are the strings that appear to the user when changing the time zone in Windows settings).
For example for Australia there are six such IDs:
ID: "W. Australia Standard Time", display string: "(UTC+08:00) Perth"
ID: "Aus Central W. Standard Time", display string: "(UTC+08:45) Eucla"
ID: "AUS Central Standard Time", display string: "(UTC+09:30) Darwin"
ID: "Cen. Australia Standard Time", display string: "(UTC+09:30) Adelaide"
ID: "AUS Eastern Standard Time", display string: "(UTC+10:00) Canberra, Melbourne, Sydney"
ID: "E. Australia Standard Time", display string: "(UTC+10:00) Brisbane"
It can be seen that a couple of time zones in the above list have the same standard time UTC bias but have different IDs. This is because of differences in daylight saving. For example Brisbane does not have daylight saving whereas Canberra, Melbourne and Sydney do. They need separate time zone keys because the daylight saving information is kept in those keys.
Some 13 years ago the questioner reported that these strings were in the Registry and since this is documented by Microsoft both in the TIME_ZONE_INFORMATION structure information, and now also in the TimeZoneInfo.FindSystemTimeZoneById(String) Method information, I think it can be relied on for the future.
So these display strings can be extracted directly using the Registry API.
An alternative way is to use the FindSystemTimeZoneById method in the TimeZoneInfo interface and then read the DisplayName Property. The documentation states that on Windows systems this simply reads the Registry entries in the same way.
For question 2, yes the currently selected time zone is given in the TIME_ZONE_INFORMATION structure by a call to the GetTimeZoneInformation API. But this does not give you the display string. Instead, the "StandardName" as reported at +4h in that structure is the time zone ID. As mentioned in the answer to 1, the corresponding display string can be found in the Registry or by using the TimeZoneInfo interface.
Although current Windows documentation for TIME_ZONE_INFORMATION gives an example for StandardName that "EST could indicate Eastern Standard Time", this documentation dates back to 2022. I think the IDs have changed since then (naturally they will be updated from time to time). Currently "Eastern Standard Time" and not "EST" is the ID in the Registry for that time zone. The corresponding Display subkey holds "(UTC-05:00) Eastern Time (US and Canada)" for that time zone.
For question 3, in my case I only needed to know whether daylight saving was in operation for the currently selected time zone. This is returned in the eax register by the GetTimeZoneInformation API (a value of 2 showing that daylight saving is currently operating).
For other (not currently selected) time zones in 2025 it seems that various methods are available.
One is direct reading of the Registry as mentioned by Jesse.
Another is to enumerate the time zones by calling EnumDynamicTimeZoneInformation. That will give you the index of the time zone you want to look at. You can pass that index to GetTimeZoneInformationForYear. According to the documentation for the DYNAMIC_TIME_ZONE_INFORMATION structure, that reads the same Registry entries.
Now there are also methods in the TimeZoneInfo interface which can be used, like GetUtcOffset (you give the date and time zone ID and the bias is calculated) or IsDaylightSavingTime (you give the date and time zone ID and the function reports whether the date falls within daylight saving time for that time zone).
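As a concrete sketch of that TimeZoneInfo route in C# (using one of the Australian registry IDs from the list above):
using System;

class TimeZoneDemo
{
    static void Main()
    {
        // Look up by registry ID, then read the user-facing display string
        TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById("AUS Eastern Standard Time");
        Console.WriteLine(tz.DisplayName); // "(UTC+10:00) Canberra, Melbourne, Sydney"

        // Question 3 for an arbitrary zone: is a given instant in daylight saving?
        Console.WriteLine(tz.IsDaylightSavingTime(DateTimeOffset.UtcNow));
    }
}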
As @robertklep pointed out, my axios version was old. Updating to the latest (currently ^1.11.0) solved my problem.
Here's what works for me:
Right click an image resource in solution explorer > Change 'Build Action' and 'Copy to Output Directory' settings > Close.
The errors disappear and I simply revert my settings back. Everything continues to work fine.
<!-- index.html -->
<!DOCTYPE html>
<html lang="fa">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>چت خصوصی دخترونه 💖</title>
<link href="https://fonts.googleapis.com/css2?family=Vazirmatn&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Vazirmatn', sans-serif;
background: linear-gradient(to right, #ff9a9e, #fad0c4);
margin: 0;
padding: 0;
display: flex;
align-items: center;
justify-content: center;
height: 100vh;
}
.chat-container {
width: 100%;
max-width: 400px;
background: #fff0f5;
border-radius: 20px;
box-shadow: 0 0 20px rgba(255, 105, 180, 0.4);
padding: 20px;
display: flex;
flex-direction: column;
}
.messages {
flex-grow: 1;
overflow-y: auto;
margin-bottom: 10px;
padding: 10px;
border: 2px dashed #ff69b4;
border-radius: 10px;
background-color: #fff;
}
.input-container {
display: flex;
gap: 10px;
}
input\[type="text"\] {
flex-grow: 1;
padding: 10px;
border: 1px solid #ff69b4;
border-radius: 10px;
font-size: 1em;
}
button {
padding: 10px 15px;
background-color: #ff69b4;
color: white;
border: none;
border-radius: 10px;
cursor: pointer;
}
</style>
</head>
<body>
<div class="chat-container">
\<div class="messages" id="messages"\>\</div\>
\<div class="input-container"\>
\<input type="text" id="messageInput" placeholder="پیام بنویس..."\>
\<button onclick="sendMessage()"\>ارسال\</button\>
\</div\>
</div>
<script src="https://cdn.socket.io/4.5.0/socket.io.min.js"></script>
<script>
const socket = io();
const messages = document.getElementById('messages');
const input = document.getElementById('messageInput');
function sendMessage() {
const msg = input.value;
if (msg.trim() !== '') {
socket.emit('chat message', msg);
input.value = '';
}
}
socket.on('chat message', function(msg) {
const div = document.createElement('div');
div.textContent = msg;
messages.appendChild(div);
messages.scrollTop = messages.scrollHeight;
});
input.addEventListener('keypress', function(e) {
if (e.key === 'Enter') sendMessage();
});
</script></body>
</html>
If you're sending POST requests to routes other than /api, then you'll need to add those to the $except array in VerifyCsrfToken, and add ->middleware(['auth:sanctum']) to those routes in the web.php routes file.
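For example, a minimal sketch of both pieces (the webhooks path and controller are hypothetical):
// app/Http/Middleware/VerifyCsrfToken.php
protected $except = [
    'webhooks/*', // a non-/api route that receives POST requests
];

// routes/web.php
Route::post('/webhooks/incoming', [WebhookController::class, 'store'])
    ->middleware(['auth:sanctum']);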
Isn't it because you don't use a return statement on the last line of my_sub_routine? test_read(p) returns its result into my_sub_routine, but my_sub_routine never returns that result, so it returns None. If I understand correctly, your last line should be
return test_read(p)
# not simply test_read(p)
It was a tiny miss in partition count.
Topic A had actually 10 partitions and I was repartitioning the rekeyed topic B to 40 partitions by mistake (as I thought topic A had 40 partitions)!
Changing the partitions count to 10 in repartition operation fixed the issue and it worked as expected.
Sorry for the miss and stupid question.
In the end (TYVM to Google support for suggesting this) an export GOOGLE_CLOUD_QUOTA_PROJECT=<project_ID> was the ticket to get the correct project to be used.
You use the .copy() method, but string objects have no such method (unlike lists, for example). You can simply write metadata = raw_metadata, because strings are immutable. Maybe you expected raw_metadata to have some type other than string - in that case you are wrong about the type returned by raw_metadata = doc.get('metadata', {}) or raw_metadata = doc[1] if len(doc) > 1 else {}. Also, if you initialize metadata = {} and then reassign it with metadata = raw_metadata.copy(), it takes the type of the last assignment. You can always check the types of your variables with print(type(your_variable)), or put that check in code, e.g. if type(your_variable) == ...
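A minimal sketch of that check, using doc from your code (the fallback values are illustrative):
raw_metadata = doc.get('metadata', {})

# Only dict-like objects have .copy(); strings do not
if isinstance(raw_metadata, dict):
    metadata = raw_metadata.copy()
else:
    print(type(raw_metadata))  # see what you actually received
    metadata = {}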
Use Office.FileDialog component.
Based upon information from https://github.com/dotnet/runtime/issues/51252 and https://github.com/dotnet/designs/blob/main/accepted/2021/before_bundle_build_hook.md, using the newly proposed PrepareForBundle
target, I have added the following to my .csproj
file:
<PropertyGroup>
<!-- For all build agents thus far in Azure DevOps, that is, Windows 2019, Windows 2022, Windows 2025, this has been sufficient.
Instead of trying to dynamically construct something based on the Windows SDK version, which constantly changes for each build
agent, we will just use this hard coded value. Note, this is a 32-bit executable. But for our purposes, it has been fine. -->
<SignToolPath>C:\Program Files (x86)\Microsoft SDKs\ClickOnce\SignTool\signtool.exe</SignToolPath>
</PropertyGroup>
<Target Name="SignBundledFiles" BeforeTargets="GenerateSingleFileBundle" DependsOnTargets="PrepareForBundle">
<!-- Use String.Copy as a hack to then be able to use the .Contains() method. See https://stackoverflow.com/a/23626481/8169136.
All of the Microsoft assemblies are already signed. Exclude others as needed.
This is using a self-signed code signing certificate for demonstration purposes, so this exact SignTool command won't
work on your machine. Use your own certificate and replace the "code sign test" with your certificate's subject name. -->
<Exec Condition="$([System.IO.Path]::GetFileName('%(FilesToBundle.Identity)').EndsWith('.dll'))
And !$([System.String]::Copy('%(FilesToBundle.Identity)').Contains('packages\microsoft.'))
And !$([System.String]::Copy('%(FilesToBundle.Identity)').Contains('packages\system.'))"
Command=""$(SignToolPath)" sign /v /fd SHA256 /tr http://ts.ssl.com /td sha256 /n "code sign test" "%(FilesToBundle.Identity)"" />
</Target>
<Target Name="SignSelfContainedSingleFile" AfterTargets="GenerateSingleFileBundle" DependsOnTargets="SignBundledFiles">
<!-- Finally, sign the resulting self contained single file executable. -->
<Exec Command=""C:\Program Files (x86)\Microsoft SDKs\ClickOnce\SignTool\signtool.exe" sign /v /fd SHA256 /n "code sign test" "$(PublishDir)$(AppHostFile)"" />
</Target>
You can read more and see the result from this blog post:
What approach did you go with in the end?
I am asking myself this question for the new Navigation 3 lib...
AppNavigator in the app module seems to be a must, but I think it is overkill to have a FeatureXNavigator for every module.
I am leaning towards injecting AppNavigator into every composable (Screen), same as for a GlobalViewModel, for example.
The other thing I would like to do is to have a standalone navigation module with an AppNavigatorInterface, which the app module will implement. The point being to easily swap out nav3 with whatever comes next.
I think your problem is using an emptyDir volume for sharing between tasks. The tasks themselves are different pods, which might not even run on the same node, not different containers sharing the same pod.
See GH issue on Argo Workflow project: https://github.com/argoproj/argo-workflows/issues/3533
Can't you use a persistent volume instead? Check the documentation for clear examples: https://argo-workflows.readthedocs.io/en/latest/walk-through/volumes/
If not, then try with an emptyDir and node affinity to make sure the tasks are on the same node, as suggested in the linked GH issue.
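For the persistent-volume route, here is a minimal sketch following the volumeClaimTemplates pattern from the Argo walk-through (the names and size are placeholders):
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: shared-volume-
spec:
  entrypoint: main
  # One PVC is created per workflow run and can be mounted by every task
  volumeClaimTemplates:
    - metadata:
        name: workdir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi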
I encountered the same error. For me, it was because I was on an older version of React Native, and didn't have the New Arch enabled. Upgrading to the latest version and enabling the New Architecture resolved the issue for me.
Use actix_web::rt::spawn(), which does not have a Send requirement, and runs the future on the current thread:
https://docs.rs/actix-web/latest/actix_web/rt/fn.spawn.html
Any other Send futures or tasks can be spawned onto different threads, and any other non-Send (!Send) futures can be spawned on the same thread; they will cooperate to share execution time.
If you need a dedicated thread for a !Send future, you can create it manually using std::thread::Builder, then use Handle::block_on() to call actix_web::rt::spawn() to run the future locally on that thread.
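A minimal self-contained sketch of the no-Send-bound spawn (the Rc is just there to make the future !Send):
use std::rc::Rc;

#[actix_web::main] // runs a single-threaded actix system on this thread
async fn main() {
    let data = Rc::new(42); // Rc is !Send, so this future cannot move threads
    // No Send bound here: the task is spawned onto the current thread
    let handle = actix_web::rt::spawn(async move {
        println!("value: {}", data);
    });
    handle.await.unwrap();
}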
Here is a similar answer that covers most of that:
Guillaume's answer was so close that I was able to fill in the missing pieces. In case anyone finds this later, summary changes:
The rolehierarchy view was the key and great to show the breadcrumbs as a plus. I can see that being used elsewhere. But I needed the toplevel bit value so I added that to the view.
I split the roles and groups into different columns in the rolehierarchy view. No big difference to the solution, but it's easier for us to have those split out.
The main query then needed roles/groups split and the toplevel in the searches.
Changed the GROUP BY to include the toplevel. Since it was a bit value, used ISNULL(MAX(CAST(toplevel AS INT)),0) AS toplevel
to determine if a toplevel role was in the hierarchy somewhere.
I added a lot more mess to the sample data to verify. Toplevel Role A now gives 5 levels deep of sub-roles, and non-toplevel Role C also gives many subroles and groups.
I have it very nearly complete in Updated DB<>Fiddle.
In the last final result, I have Alice's full access and whether it is direct or under a toplevel. But I can't work out a HAVING clause to filter only those with toplevel = 0. Does anyone know how to do that?
Thank you all.
Well, first, these two sets aren't identical in ordering. At a glance, they flip the ordering of 'n' and 'f'.
Beyond that, while set ordering isn't guaranteed in standard sets in Python as a language, individual implementations may implement some ordering type. Whether that's a reliable contract will ultimately be a function of how much you trust that specific implementation and their promise to offer that as a stable behaviour.
Based on CPython's set (of which the meat and potatoes of the insertion implementation lives here), it looks like there's no particular care taken to preserve any specific ordering, nor is there any specific care taken to randomize the order beyond using object hashes, which are stable for any object's lifetime and tend to be stable globally for certain special values (like integers below 256, and individual bytes from the ASCII range in string data).
The same can be said for the implementation of set's __repr__ (here), which makes no special effort to randomize or stabilize the order in which items are presented.
Emphatically, though, these are implementation details of CPython. You shouldn't rely on this unless you positively have to, and even then, I'd step back and reevaluate why you're in that position.
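For example, the hash stability mentioned above is easy to observe, with the caveat that all of this is CPython behaviour, not a guarantee:
# Small ints hash to themselves in CPython, so sets of them order reproducibly
print(hash(5), hash(255))   # 5 255
print({3, 1, 2})            # CPython prints {1, 2, 3} - do not rely on it

# str hashes are randomized per process (see PYTHONHASHSEED), so string-heavy
# sets can change order between runs
print(hash("n"))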
npm run watch
It will rebuild on any saved change.
By adding a delay to the trigger (below code in the form attributes), everything worked properly with the handler being called and preventing the default behavior.
hx-trigger="submit delay:1ms"
The TOKEN_EXPIRED error after a day suggests that the Firebase refresh token, which is stored in localStorage on the web via browserLocalPersistence, is being lost or invalidated.
Your firebase-config.ts looks correct for setting persistence so the most probable cause is your browser's settings or an extension is clearing localStorage or site data after a period.
Start by checking your browser's privacy settings and extensions. If you can replicate the issue consistently across different browsers (or after confirming localStorage is not being cleared), then you'd need to dig deeper into the Firebase SDK's interaction with your specific environment.
There is a work-around to access the underlying XGB Booster:
import xgboost as xgb

booster = model.get_booster()
dtest = xgb.DMatrix(X_test)
y_shap = booster.predict(dtest, pred_contribs=True)
for (int i = 0; i <= 8; ++i) {
System.out.println(Math.min(i, 8 - i));
}
Temp mail Boomlify is the best temp mail.
It is much better than a traditional temp mail because Boomlify is a privacy-focused temporary email platform that offers instant inbox creation, long-lasting emails, a centralized dashboard, custom domain and API support, a smart inbox view, cross-device sync, a multi-language UI, spam protection, live updates, and developer-friendly features like webhooks and REST APIs, all without registration.
Thanks to @mkrieger1 for this one: some images I used literally have over 100,000 colors... something I NEVER expected to happen, so .getcolors() returned None. I changed the value to 100 million so I hopefully never face this problem ever again.
all_colors = main_img.getcolors(maxcolors=100000000)
Simply add
|> opt_css(css = "
.gt_column_spanner {
border-bottom-style: solid !important;
border-bottom-width: 3px !important;
}")
Yes, you can wildcard the paths of the source files. Assuming you are sourcing them from GCS, your CREATE external table query would be (shown here for Parquet; for CSV, set format = 'CSV' and match *.csv):
CREATE OR REPLACE EXTERNAL TABLE `project.dataset.table`
OPTIONS (
format = 'PARQUET',
uris = ['gs://gcs_bucket_name/folder-structure/*.parquet']
);
Very late to the party on this one, but this thread is the top google result for 'javascript identity function' so I figured I'd chime in. I'm newish to Javascript, so hopefully I'm not simply unaware of a better solution.
I find this code useful:
function identity(a) { return a }
function makeCompare(key = identity, reverse = false) {
function compareVals(a, b) {
const keyA = key(a);
const keyB = key(b);
let result = keyA < keyB ? -1 : keyA > keyB ? 1 : 0;
if ( reverse ) { result *= -1 }
return result;
}
return compareVals;
}
I can then sort arbitrary data structures in a tidy way:
const arrB = [ {name : "bob", age: 9}, {name : "alice", age: 7} ];
console.log(arrB.sort( makeCompare( val => { return val.age } )));
console.log(arrB.sort( makeCompare( val => { return val.age }, true)));
// output:
// Array [Object { name: "alice", age: 7 }, Object { name: "bob", age: 9 }]
// Array [Object { name: "bob", age: 9 }, Object { name: "alice", age: 7 }]
Note that this is dependent on having an 'identity' function to use as the default 'key' function.
I think that pd.cut().value_counts() is what you're looking for.
import pandas as pd
import plotly.express as px
# Example data
data = {
"data-a": [10, 15, 10, 20, 25, 30, 15, 10, 20, 25],
"data-b": [12, 18, 14, 22, 28, 35, 17, 13, 21, 27]
}
df = pd.DataFrame(data)
# Define bins
bin_range = range(9, 40, 5)
# Bin data
binned_data_a = pd.cut(df["data-a"], bins=bin_range).value_counts().sort_index()
binned_data_b = pd.cut(df["data-b"], bins=bin_range).value_counts().sort_index()
# sort_index() restores bin order; value_counts() sorts by count by default
diff = binned_data_a - binned_data_b
# Plot
px.bar(
x = bin_range[:-1],
y = diff.values,
labels={"x": "Bin start value", "y": "Difference (a - b)"}
)
Thanks to @Echedey Luis for suggesting .value_counts(). Also see the docs for .cut() and .value_counts().
The right way to do this is to open the Adaptive Card as a formula and reference the values like Topic.title. This will ensure that the adaptive card is able to read the data properly.
You'll also get that response if the sudo wg-quick down wg0 command is issued after wg is already down. In that case, just run:
sudo wg-quick up wg0
Ran into a similar issue with extracting files from an iOS/iPadOS app when trying to export the .realm data from Realm Studio to a .csv file...
Here to add that as of July 2025 using Realm Browser (an app that is no longer updated) works just as Apta says (on an Intel Mac running Sequoia 15.5).
I opened the default.realm file I was working with in Realm Browser, and was asked for a valid encryption key to open the file. Instead, I opened up a file that Realm had created in the same folder called "default.v7.backup.realm", which worked just fine. From there, it was easy to export the .csv file(s) for the class(es) of interest.
Thanks for the assist, Apta!!!
This is a well-known issue.
When you’re on a Zoom call (or any other voice call app), the system automatically switches your device’s audio into communication mode which is optimized for voice, not for high-quality stereo sound.
Effects:
• Stereo gets downmixed to mono
• High/low frequencies are cut off
• Music, binaural, or special effects often get suppressed
On web there’s no way to bypass this because the browser doesn’t have access to low level audio routing. On native apps you should have more control.
It turns out that the error is as a result of lack of support for Secure Boot - so if you stop the VM - then go into settings / security and disable Secure Boot then you will be able to start the VM and complete the installation process. You can then investigate the process of enabling Secure Boot on Ubuntu - see https://wiki.ubuntu.com/UEFI/SecureBoot for more information.
When I got this error, I could not execute npm cache clean because every npm execution produced the isexe error. So what I did was uninstall Node.js, remove the /usr/lib/node_modules folder, and then reinstall, and it worked.
I needed to enable the users to send some Ethereum from their metamask wallet to the smart contract which they want to buy some tokens of via frontend. Based on metamask docs this is how one can call the send function of metamask in frontend:
window.ethereum.request({
method: "eth_sendTransaction",
params: [
{
from: metamaskWalletAddress, // The user's active address.
to: tokenAddress, // Address of the recipient.
value: userDesiredAmount,
gasLimit: "0x5028", // Customizable by the user during MetaMask confirmation.
maxPriorityFeePerGas: "0x3b9aca00", // Customizable by the user during MetaMask confirmation.
maxFeePerGas: "0x2540be400", // Customizable by the user during MetaMask confirmation.
}],
})
.then((txHash: any) => console.log("txHash: ", txHash))
.catch((error: any) => console.error("errorL ", error));
However, as @petr-hejda said, the token contract needs to have receive() and fallback() functions as well to be able to receive the Ether.
First, remove the image background to a transparent background with https://www.remove.bg/
Then go to this website to generate the @mipmap icons and download them: https://www.appicon.co/
Then replace your old files with the downloaded files.
class A:
    def __init__(self, x):
        print("Calling __init__")
        self.x = x

# Define a replacement __new__ and attach it to the class after creation
def mynew(cls, *args, **kwargs):
    print("Calling mynew")
    return object.__new__(cls)

A.__new__ = mynew
A(10)  # prints "Calling mynew", then "Calling __init__"

# Swapping in a different __new__ works the same way
A.__new__ = lambda cls, x: object.__new__(cls)
a = A(10)
print(a.x)
When you declare an array variable of a type, it will only handle the type specified; otherwise it just does an implicit type conversion. If you want to enforce the type, you must do it at the place you use the array, or insert using the index, since that way it is checked at save/compile time.
Local Array of String &myArray = CreateArrayRept("", 0);
&myArray.push(1); // This compiles (add member)
&myArray[2] = "1"; // This compiles (add member)
&myArray[3] = "1"; // This compiles (add member)
&myArray[4] = 1; // This doesn't
I was not able to pull this off strictly using Google Apps Script, on account of being a novice, but I did have a workaround using a combination of functions and script.
For each filterable column criterion needed on the sheet, I matched the resulting column numbers with arrayformulas, then matched them against themselves to limit my column range.
Filter_Job_Return = arrayformula(filter(COLUMN(Job_Names_Range),Job_Names_Range=Job))
Filter_Date_Return = arrayformula(filter(COLUMN(Date_Support_Range),Date_Support_Range>=Date_Range_Beginning,Date_Support_Range<=Date_Range_End))
Filter_Columns_Match = ARRAYFORMULA(text(FILTER(Filter_Job_Return_Range,ISNUMBER(MATCH(Filter_Job_Return_Range,Filter_Date_Return))),"0"))
For the variable row I needed, I did a similar filter to return the row number for that employee, though the same column-matching logic can be adapted to rows to remove potential duplicates.
Example:
Need: Job# 4, Between Dates 01/15/20xx and 03/15/20xx, Employee Name: Joe
Update Projected Hours to 15
Labels | Col 2 | Col 3 | Col 4 | Col 5 |
---|---|---|---|---|
Job_Return | 2 | 4 | 4 | 6 |
Date_Return | 01/01/20xx | 02/01/20xx | 03/01/20xx | 04/01/20xx |
Columns_Match | 3 | 4 | | |
Row_Return | 6 | | | |
Employee Names | Projected Hours | Projected Hours | Projected Hours | Projected Hours |
Joe | 10 | 10 | 10 | 10 |
Macro to Replace selected values:
function ReplaceProjectedHours(){
const ss = SpreadsheetApp.getActive();
let sheet = ss.getSheetByName("Projected Hours"); //pulls from sheet by name
const projectedHoursPerWeek = ss.getRangeByName("Projected_Hours_per_Week").getValues();
let row = ss.getRange("B4").getDisplayValue(); //pulls cell from upper left corner of spill array formula if multiple results are given
let colArray = ss.getRangeByName("Filter_Columns_Match").getDisplayValues();
//Logger.log("colArray.length:" + colArray.length);
//Logger.log(colArray);
//https://stackoverflow.com/questions/61780876/get-x-cell-in-range-google-sheets-app-script
for(let j = 0; j < colArray[0].length; j++) {
if (colArray[0][j] !== "") { // Check if the cell is not empty
sheet.getRange(row, colArray[0][j]).setValues(projectedHoursPerWeek); //sets values based on position in array of object
}
}
}
Labels | Col 2 | Col 3 | Col 4 | Col 5 |
---|---|---|---|---|
Job_Return | 2 | 4 | 4 | 6 |
Date_Return | 01/01/20xx | 02/01/20xx | 03/01/20xx | 04/01/20xx |
Columns_Match | 3 | 4 | | |
Row_Return | 6 | | | |
Employee Names | Projected Hours | Projected Hours | Projected Hours | Projected Hours |
Joe | 10 | 15 | 15 | 10 |
And the macro replaces the match cells over non-continuous ranges based on the matching criteria
Can you please provide your complete Node.js code? I was able to use the latest 3.7 gremlin-javascript driver to execute a comparable query against my local gremlin-server populated with the sample modern graph:
const gremlin = require('gremlin');
const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;
const traversal = gremlin.process.AnonymousTraversalSource.traversal;

const dc = new DriverRemoteConnection('ws://localhost:8182/gremlin');
const g = traversal().withRemote(dc);
const graphTraversal = await g.V().hasLabel('person').has('age', 29).toList();
console.log(graphTraversal);
The output I received:
[
Vertex {
id: 1,
label: 'person',
properties: { name: [Array], age: [Array] }
}
]
I was also able to use a Neptune notebook to execute a comparable query against a graph loaded with the sample air-routes data:
%%gremlin
g.V().hasLabel('airport').has('code', 'LAX')
Output:
v[13]
Maybe something changed in Jest 30, but the accepted answer is not working for me. I had to do this in my global.d.ts file.
import 'jest'
declare global {
namespace jest {
interface Expect {
customMatcher(expected: string): CustomMatcherResult
}
}
}
In Program, try
Test.IServer server = new Test.Server();
Perhaps then you can call your MethodImpl like server.MethodImpl(params), because the interface(!) is what provides your method in COM. It is like calling COM from VBA - you don't care about the CoClass, because you reference both the COM object and the class from the typelib, which provides just the interface for the connection, from which you call the method. I don't know of COM providing classes directly. The CoClass is compiled with the server, not the client - you should not care about it in the client.
P.S. But, frankly speaking, I have the same problem with the NetSxS example with .NET Core 5.0 reg-free COM.
There seems to be this prop for selected text, but nothing for the placeholder:
selectedTextProps={{allowFontScaling: false}}
Another way for this to not work is for C-Space to be mapped to something else in your desktop environment. On my Mac, C-Space was mapped to Input Sources -> Select the previous input source.
I got a response here: https://github.com/tbroyer/gradle-errorprone-plugin/issues/125
As stated on this page: https://errorprone.info/bugpatterns, there are two categories of bug patterns - On by default and Experimental.
RemoveUnusedImport is marked as Experimental, which means it's not enabled by default.
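Assuming you are using the gradle-errorprone-plugin from the linked issue, opting in should look roughly like this (Kotlin DSL; verify the exact syntax against the plugin's README):
import net.ltgt.gradle.errorprone.errorprone

tasks.withType<JavaCompile>().configureEach {
    options.errorprone {
        // Experimental checks are off by default; enable this one explicitly
        enable("RemoveUnusedImport")
    }
}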
Some tags cause Doxygen to fail. For example "@code". Try to simplify the documentation comments in your code and then re-run Doxygen.
Any solution on this? addIndex is totally useless if you change the sorting; it is not added at position 0 (the top row). Thanks.
There is always a balance to find between ease of debugging and security. Access tokens could be truncated for better safety, but as they live only a few hours and are displayed at DEBUG level, this is acceptable. That said, a PR to improve that would be welcome.
I must say that after years this issue is still there.
IDB access using an index works correctly in major browsers, but iOS Safari still suffers.
You should also move the Image instantiation into the ui.access(...) block.
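Roughly like this (a sketch; ui, layout and resource stand in for your own variables):
ui.access(() -> {
    // Create and attach the Image while holding the UI lock,
    // so the change is pushed to the client safely
    Image image = new Image(resource, "generated image");
    layout.add(image);
});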
Installing previous versions of react-popper may solve the problem.
---
- name: List Local Users
hosts: all
gather_facts: false
tasks:
- name: Get local user information
getent:
database: passwd
register: passwd_entries
- name: Display local users
debug:
msg: "Local user: {{ item.key }}"
loop: "{{ passwd_entries.ansible_facts.getent_passwd | dict2items }}"
      # the login shell is index 5 of the getent_passwd value list
      when: item.value[5] != '/usr/sbin/nologin' and item.value[5] != '/bin/false'
Can you test it this way?
I downgraded to Node 20.12.1, as @Sebastian Kaczmarek mentioned, and it worked.
My bug was with this dependency: speech_to_text: ^7.2.0. Commenting it out made it work!
Google refresh tokens can expire for a few different reasons, you can read this documentation for more information: https://developers.google.com/identity/protocols/oauth2#expiration
Have you found the reason behind this issue?
I'm getting the exact same error message; could you please share the resolution if you have one?
You are connecting to a v3 instance, that is correct.
v2 SDK writes are supported via compatibility endpoints, but Flux queries are not officially supported and may break.
For long-term stability and performance, I suggest migrating to the v3 SDK and native APIs for both writing and querying data.
For more guidance, see the table in this blog: https://www.influxdata.com/blog/choosing-client-library-when-developing-with-influxdb-3-0/
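A minimal sketch with the v3 client library (influxdb3-python; the host, token and database are placeholders):
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host="https://your-region.aws.cloud2.influxdata.com",
                         token="YOUR_TOKEN", database="YOUR_DB")

# Native v3 write (line protocol) and SQL query - no Flux involved
client.write(record="measurement,tag=a value=1.0")
table = client.query("SELECT * FROM measurement")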
So for anyone else having this error, this was a doozy. In this instance, for some reason Apache had created an actual file in the /etc/apache2/sites-enabled/ folder (not to be confused with the sites-available folder). You need to delete the virtual-hosts.conf file from there:
sudo rm /etc/apache2/sites-enabled/virtual-hosts.conf
and then run:
cd /etc/apache2/sites-available
sudo a2ensite *
This will create a symbolic link in the sites-enabled folder (so it's no longer a "real file").
I've no idea how this happened, as I didn't even know the "sites-enabled" folder existed, so I certainly didn't put anything in there!?
Whilst I won't accept this as the answer, I'd like to point out that after further tests the code sample below seems to return PCM data. I made a waveform visualisation, and the data returned includes values ranging from negative to positive in a wave-like structure, which can be passed into an FFT window and then into an FFT calculation.
// audio file reader
reader = new Mp3FileReader(filename);
byte[] buffer = new byte[reader.Length];
int read = reader.Read(buffer, 0, buffer.Length);
pcm = new short[read / 2];
Buffer.BlockCopy(buffer, 0, pcm, 0, read);
Here is a good configuration to start and stop 2 Vert.x applications (which both deploy several verticles).
The commented part is for optionally waiting for the applications to start (we can force the test to wait in @BeforeAll if we prefer).
<profiles>
<!-- A profile for windows as the stop command is different -->
<profile>
<id>windows-integration-tests</id>
<activation>
<os>
<family>windows</family>
</os>
</activation>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>${properties-maven-plugin.version}</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>${maven-failsafe.version}</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${exec-maven-plugin.version}</version>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<executions>
<execution>
<phase>initialize</phase>
<goals>
<goal>read-project-properties</goal>
</goals>
<configuration>
<urls>
<url>file:///${basedir}\src\test\resources\test-conf.properties</url>
</urls>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>integration-test</goal>
<goal>verify</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>Displaying value of 'testproperty' property</echo>
<echo>[testproperty] ${vortex.conf.dir}/../${vertx.hazelcast.config}</echo>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>start-core</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>${java.home}/bin/java</executable>
<!-- optional -->
<workingDirectory>${user.home}/.m2/repository/fr/edu/vortex-core/${vortex.revision}</workingDirectory>
<arguments>
<argument>-jar</argument>
<argument>vortex-core-${vortex.revision}.jar</argument>
<argument>run fr.edu.vortex.core.MainVerticle</argument>
<argument>-Dconf=${vortex.conf.dir}/${vortex-core-configurationFile}</argument>
<argument>-Dlogback.configurationFile=${vortex.conf.dir}/../${vortex-core-logback.configurationFile}</argument>
<argument>-Dvertx.hazelcast.config=${vortex.conf.dir}/../${vertx.hazelcast.config}</argument>
<argument>-Dhazelcast.logging.type=slf4j</argument>
<argument>-Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.SLF4JLogDelegateFactory</argument>
<argument>-cluster</argument>
</arguments>
<async>true</async>
</configuration>
</execution>
<execution>
<id>start-http</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>${java.home}/bin/java</executable>
<!-- optional -->
<workingDirectory>${user.home}/.m2/repository/fr/edu/vortex-http-api/${vortex.revision}</workingDirectory>
<arguments>
<argument>-jar</argument>
<argument>vortex-http-api-${vortex.revision}.jar</argument>
<argument>run fr.edu.vortex.http.api.MainVerticle</argument>
<argument>-Dconf=${vortex.conf.dir}/${vortex-http-configurationFile}</argument>
<argument>-Dlogback.configurationFile=${vortex.conf.dir}/../${vortex-http-logback.configurationFile}</argument>
<argument>-Dvertx.hazelcast.config=${vortex.conf.dir}/../cluster.xml</argument>
<argument>-Dhazelcast.logging.type=slf4j</argument>
<argument>-Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.SLF4JLogDelegateFactory</argument>
<argument>-Dagentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005</argument>
<argument>-cluster</argument>
</arguments>
<async>true</async>
</configuration>
</execution>
<!-- <execution>-->
<!-- <id>wait-server-up</id>-->
<!-- <phase>pre-integration-test</phase>-->
<!-- <goals>-->
<!-- <goal>java</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <mainClass>fr.edu.vortex.WaitServerUpForIntegrationTests</mainClass>-->
<!-- <arguments>20000</arguments>-->
<!-- </configuration>-->
<!-- </execution>-->
<execution>
<id>stop-http-windows</id>
<phase>post-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>wmic</executable>
<!-- optional -->
<workingDirectory>${project.build.directory}</workingDirectory>
<arguments>
<argument>process</argument>
<argument>where</argument>
<argument>CommandLine like '%vortex-http%' and not name='wmic.exe'
</argument>
<argument>delete</argument>
</arguments>
</configuration>
</execution>
<execution>
<id>stop-core-windows</id>
<phase>post-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>wmic</executable>
<!-- optional -->
<workingDirectory>${project.build.directory}</workingDirectory>
<arguments>
<argument>process</argument>
<argument>where</argument>
<argument>CommandLine like '%vortex-core%' and not name='wmic.exe'</argument>
<argument>delete</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
And content of property file :
vortex.conf.dir=C:\\prive\\workspace-omogen-fichier\\conf-avec-vortex-http-simple\\conf\\vortex-conf
vortex-core-configurationFile=core.conf
vortex-core-logback.configurationFile=logback-conf\\logback-core.xml
vortex-http-configurationFile=http.conf
vortex-http-logback.configurationFile=logback-conf\\logback-http-api.xml
vortex-management-configurationFile=management.conf
vortex-management-logback.configurationFile=logback-conf\\logback-management.xml
vertx.hazelcast.config=cluster.xml
.parent:has(+ ul .active) {
background: red;
}
The lucky solution is
<label for="look-date">Choose the year and month (yyyy-MM):</label>
<input type="month" th:field="${datePicker.lookDate}" id="look-date"/>
but it is important to change the type to java.util.Date
@Data
public class DatePickerDto implements Serializable {
@DateTimeFormat(pattern = "yyyy-MM")
private Date lookDate;
private String dateFormat = "yyyy-MM";
}
How to enable an HTML form to handle java.time.LocalDate? 🤔 I don't know.
Possible issues:
Too few epochs: 20 is too low for decent convergence. Leave it at the default, or start with at least 100. You can even use a higher number like 1k and set an early-stopping strategy.
80 images is also on the low side. Try to increase it to at least 1k. You can synthetically increase the count with data augmentation, e.g. the Albumentations library. There is also a way to synthesize images during YOLO training; take a look at the YOLO docs to configure this properly. I would mostly use the lighting, contrast, rotation, translation, crop and eraser functions (see the sketch after this list).
If your input image is large, especially one taken from a distance, you might get better accuracy working on a sliding window. The easiest way might be SAHI (https://docs.ultralytics.com/guides/sahi-tiled-inference/)
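A sketch of those suggestions with the ultralytics trainer (the values are illustrative; argument names per the YOLO training docs):
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="dataset.yaml",  # your dataset config
    epochs=300,           # plenty of epochs...
    patience=50,          # ...with early stopping
    hsv_v=0.4,            # lighting / brightness jitter
    degrees=10.0,         # rotation
    translate=0.1,        # translation
    scale=0.5,            # scale jitter (crop-like)
)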
You can write a helper function to perform the transformation:
function formatDate(year) {
if (year < 0) {
return `${Math.abs(year)} BCE`;
} else {
return `${year}`;
}
}
Then, you can call the helper using, for example, formatDate(-3600) to get "3600 BCE".
Update 2025: This problem still exists, but I'm building a comprehensive solution
The core issue remains - OpenAI's API is stateless by design. You must send the entire conversation history with each request, which:
Increases token costs exponentially with conversation length
Hits context window limits on long conversations
Requires manual conversation management in your code
Current workarounds:
Manual history management (what most answers suggest)
LangChain's ConversationBufferMemory (still sends full history)
OpenAI's Assistants API (limited, still expensive)
I'm building MindMirror to solve the broader memory problem:
Already working: Long-term memory across sessions
Remembers your projects, preferences, and goals so you don't re-introduce yourself or the way you tackle challenges/problems
Works with any AI API through MCP standard (also Claude Code, Windsurf, Cursor etc)
$7/month unlimited memories (free trial: 25 memories)
Coming soon: Short-term context management
Persistent conversation threads across AI models
Intelligent context compression to reduce token costs
Easy model switching while maintaining conversation state
My vision: Turn AI memory from a "rebuild it every time" problem into managed infrastructure. Handle both the immediate context issue (this thread) and the bigger "AI forgets who I am" problem.
Currently solving the long-term piece: https://usemindmirror.com
Working on the short-term context piece next. The memory problem is bigger than just conversation history - it's about making AI actually remember you and adapt to your needs, preferences, wants, etc.
How about renaming all those tables?
It seems doing "Hide copilot" in the menu really removes all AI and Copilot appearances.
So here is the answer (thank me later): you need to actively call sender.ReadRTCP() and/or receiver.ReadRTCP() in a goroutine loop in order to get those stats.
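A rough sketch with pion/webrtc v3 (sender here is your *webrtc.RTPSender; do the same for receivers):
// Drain RTCP in the background so the stats interceptors see the reports
go func() {
    for {
        if _, _, err := sender.ReadRTCP(); err != nil {
            return // the sender was closed
        }
    }
}()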
Ok, I am an IDIOT !!!
I went back and traced not only the code within this function, but step by step leading up to it. I found a line of code that removed the reference placeholder for the DOM element before DataTables ever got called, so I was trying to apply the DataTables code to a non-existent DOM element!!!
Thanks to all those that replied.
You could probably code a module to have an infinite space of memory that you could use as SWAP or logical hard disk partition.
This works:
wp core update-db --network
I had this problem: target/classes had the updated .class files, but the .war had the old ones.
After hours I found that MyProj/src/main/webapp/WEB-INF/class/xxx/yyy had the old classes. I just deleted it.