Solved the problem by replacing localhost with my actual local IP address (e.g. http://192.168.2.32:8000).
It seems Claude Desktop has a problem calling localhost addresses for security reasons.
You can get your local IP address on macOS/Unix using this command:
ifconfig | grep "inet " | grep -v 127.0.0.1
Just to make it clearer:
// replace
fetch("http://localhost:8000")
// with
fetch("http://192.168.2.32:8000")
Instead of defining do_EOF(), add a precmd hook that maps "EOF" to "exit":
def precmd(self, line):
    if line == "EOF":
        return "exit"
    return line
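For context, here is a minimal sketch of how that hook fits into a cmd.Cmd shell (the class name and the do_exit command here are illustrative, not from the original post):

```python
import cmd

class DemoShell(cmd.Cmd):
    prompt = "(demo) "

    def precmd(self, line):
        # cmd.Cmd feeds "EOF" as the line when it hits end-of-input
        # (Ctrl-D); translate it so a single do_exit handles both paths.
        if line == "EOF":
            return "exit"
        return line

    def do_exit(self, arg):
        """Exit the shell."""
        return True  # returning a truthy value stops cmdloop()
```

Because precmd rewrites the line before dispatch, there is no need for a separate do_EOF method.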
Tried multiple times with the last option, i.e. --with-openssl=/usr/local/ssl, and ran sudo make; it still fails to build the ssl module and keeps complaining the same:
Could not build the ssl module!
Python requires an OpenSSL 1.0.2 or 1.1 compatible libssl with X509_VERIFY_PARAM_set1_host().
LibreSSL 2.6.4 and earlier do not provide the necessary APIs, https://github.com/libressl-portable/portable/issues/381
Encoding context into a sequence of diffs of document formats may need different handling depending on the tech environment.
Thanks for the answer and feedback. On the other hand (which is my case), if someone wants to remove just specific suggested fields, the code can be rewritten like this:
@api.model
def fields_get(self, allfields=None, attributes=None):
    res = super().fields_get(allfields, attributes=attributes)
    fields_to_remove_from_custom_search = ['field_1', 'field_2']
    for field in fields_to_remove_from_custom_search:
        if res.get(field):
            res[field]['searchable'] = False
            res[field]['sortable'] = False
    return res
My defineConfig looks like this:
export default defineConfig({
    schema: './src/lib/server/db/schema.ts',
    dbCredentials: {
        url: process.env.DATABASE_URL
    },
    verbose: true,
    strict: true,
    dialect: 'postgresql'
});
You don't need to set all the connection details. They are included in the database url.
DATABASE_URL="postgres://user:password@host:port/db-name"
Supabase refers to this as the connection string: https://supabase.com/docs/guides/database/connecting-to-postgres#direct-connection
When using position: fixed, at least one of the offset properties top, right, bottom, or left is required, so in your case you should add top and right properties to the drawer class to define where the drawer should be placed on screen. Also add a z-index so that it stays on top of other components. (screenshot with added properties)
I found it!
In the settings.xml file I surrounded the password with quotes, like "passwordwith&therestofthepassword":
<profile>
<id>keystorePassword</id>
<properties>
<keystorePassword>passwordwith&therestofpassword</keystorePassword>
</properties>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
</profile>
You might need to commit the changes by going to the Cart and selecting Commit Changes; only then will the changes take effect in the domain.
Regards
You could create 2 separate projects to do it.
did you find a solution to your problem? I am facing the same issue..
Thanks to this thread I managed to construct the plot that I need.
I also managed to color the y axes in a specific color.
However, I've been trying for hours to:
Color the points in that same way (instead of the default red, green, blue), and can't find the solution. Any help? Thanks a lot!
Less importantly, I wanted to connect the dots of the same color (it's a timeline).
For those arriving here from a Google search: make sure you aren't naming a normal function with the prefix "use". That'll turn it into a hook :)
I think the problem is that the TopRecommendationsComponent is not being imported in Tab1Page. Another problem you could have is explicitly declaring standalone as false in Tab1Page.
I'm having a similar problem. I created a private repo, pushed content, and now I want to share it. But I cannot get access to share it. It says I have 2FA configured, but I don't remember doing that, and it may have been on an earlier phone. There seems to be no way to un-configure and re-configure 2FA, and no way to find out which app it thinks I'm using on my phone. I've wasted most of an afternoon on this with no progress.
Run your app again from scratch. Since it's for Android, make sure to build with an Android emulator or device.
I found a workaround: drop the driver file in a folder to which you have full access, then point your script to that path.
I got this error when trying to "Allow Network Access" in the Partner account.
Is there anything we should do before that?
This issue solution is posted on MSDN community site:
function intToRomanOrReverse($value, bool $toRoman = true) {
    static $nf = null;
    if ($nf === null) {
        $nf = new \NumberFormatter('@numbers=roman', \NumberFormatter::DECIMAL);
    }
    // format: int -> Roman numeral; parse: Roman numeral -> int
    return $nf->{$toRoman ? 'format' : 'parse'}($value);
}
function intToRomanNumeral(int $value): ?string {
    return intToRomanOrReverse($value, true);
}
function romanNumeralToInt(string $roman): ?int {
    return intToRomanOrReverse($roman, false);
}
Try:
Permissions Issue – Run the installer from a different user directory (e.g., C:\Users\YourUser\Downloads).
Corrupt Installer Settings – Try sfc /scannow in an admin command prompt and reboot.
Here are the steps to make this visible in REST.
Edit your dataset.
Add a formula.
Name it something you want to see appear in REST.
Change the output type to STRING.
Select the field which already appears in your data grid (assuming you encountered this problem and already have the column/data field in your analytics query).
The field will appear on the report with this format:
Save, drag it into your report, remove the old column, update the alias using the info button on the dataset, save the dataset, and validate by pulling via REST.
Apache Doris 3.0.5 will have new json_object() function which does exactly what I need.
Thank you, Doris team for listening!
This is not obvious.
You need to log in to the Entra application (this could be the first time you are visiting it ;-).
Create a new secret.
Go to your release and click Manage next to the Azure Resource Manager connection.
You should see a warning about secret expiration.
Click the Convert button.
From Max in the comments:
You seem to be somewhat confused about how this is supposed to work. Your YAML file just contains a reference to the class and doesn't actually serialize the class itself. You still need to actually load the code containing the class before you parse the YAML, either through require or by autoloading with Zeitwerk.
So, I found the solution. The problem was that boto3 version 1.36 corrupted the file by adding x-amz-checksum-crc32 to the end of it. I downgraded boto3 using pip install "boto3<1.36" and everything works fine!
You might want to still use PHP's NumberFormatter and set the PARSE_INT_ONLY attribute. Here's its manual: https://www.php.net/manual/en/class.numberformatter.php. You can try something like this:
function romanToInt(string $roman) {
static $nf = null;
if ($nf === null) {
$nf = new \NumberFormatter('@numbers=roman', \NumberFormatter::DECIMAL);
$nf->setAttribute(\NumberFormatter::PARSE_INT_ONLY, true);
}
return $nf->parse($roman);
}
It is possible to do this. For a relevant example and further information, take a look at this repository, which may be helpful in understanding and implementing a solution:
https://github.com/mmushfiq/springboot-microservice-common-lib
There is an open ticket for this specific issue in vscode's repo.
Try putting this style on the div that contains the class slick-slide:
z-index: -1;
Ensure you've appended ->fill() to your form's schema: https://filamentphp.com/docs/3.x/forms/fields/getting-started#setting-a-default-value
The fact that you don't get corr=1 at the expected location is because the normalization in Matlab (and possibly also in other packages) is not performed correctly. A cross-correlation is in fact a "sliding dot product" between the two data arrays. To get -1 <= corr <= 1 one can take the cos(theta) associated with the dot product, which means that for each window one has to normalize the unnormalized corr by dividing by the product of the lengths of the two vectors currently in use.
You can find the correct formula in eq. (1) of the following article.
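As a sketch of the per-window normalization described above, here is a naive Python/NumPy version (an O(n·m) loop, written for clarity rather than speed; the function name is mine):

```python
import numpy as np

def normalized_xcorr(x, y):
    """Sliding dot product of y over x, normalized per window by the
    product of the two vector lengths, so -1 <= corr <= 1 (the cosine
    of the angle between the window and the template)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    out = np.empty(n - m + 1)
    y_norm = np.linalg.norm(y)
    for i in range(n - m + 1):
        window = x[i:i + m]
        denom = np.linalg.norm(window) * y_norm  # per-window normalization
        out[i] = np.dot(window, y) / denom if denom else 0.0
    return out
```

With this normalization, a window that is an exact (scaled) copy of the template yields exactly 1.0, which is the property the unnormalized sliding dot product lacks.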
Well,
I found the issue.
The event was different because it was created on a different calendar id.
The 3rd-party system that created the event defined it on its own calendar, which is different from regular events that are sent to invitees.
So the only thing to do was use the calendar id that is on the event:
event.getOriginalCalendarId()
and use it when calling Calendar.Events.get(calendarId, eventId)
Found an answer:
The history package can still be used with react-router v7, same as in the code in the question, via unstable_HistoryRouter as follows:
import { unstable_HistoryRouter as Router, Routes, ... } from "react-router";
The key point is to add the history listener before Router adds its listener; otherwise react-router throws an error disallowing more than one history listener.
I added cg = censusgeocode.CensusGeocode() to my code, but it's still not working. When I try to get the census tract with census_tract = cg.coordinates(x=newLoc.latitude, y=newLoc.longitude), it doesn't work as expected. I used newLoc to get the coordinates from the address. Can you help me figure out what's wrong?
Please see the code example in the original post that shows how I handled it.
Leaving only the Vue - Official extension and removing the Dart extension solved my problem.
I think it depends on if you want your database table to enforce non-nulls and FK constraints. If you don't need your database table to do this kind of enforcement then you aren't breaking any contracts about the integrity or completeness of the rows in a single table. If you want to have your database table do this kind of "checking" then you'd need a second table that allows for "missing-ness" in your draft.
fixed it by removing flutter completely and reinstalling through vscode
In Magento 2, the decimal separator behavior is determined by the locale settings and the format of the numbers being displayed. Since the comma is being shown as the decimal separator in the customizable options dropdown, it may be due to the locale settings or a specific issue with number formatting in your theme.
You are using the strict equality operator ===; make sure the user-type value in your database and in your code are the same.
What is wrong with my code? Every time I run it and input the set username, it just makes me input my username again.
import time
username = "229796"
password = "rwqfsfascx"
yes = "update my system"
no = "continue to browser"
count = 0
while count < 3:
    user_choice = input('Enter username.')
    if user_choice == username:
        print('Remember this code: 17283645.')
        count = 4
    else:
        print('Login unsuccessful, please try again.')
        count += 1
        time.sleep(2)  # Pause for 2 seconds
while count < 3:
    user_choice = input('Enter password.')
    if user_choice == password:
        print('Access granted.')
        count = 4
    else:
        print('Login unsuccessful, please try again.')
        count += 1
print('Initiating system start. Please wait...')
time.sleep(3)  # Pause for 3 seconds
print('System activated.')
time.sleep(1)  # Pause for 1 second
user_choice = input('Would you like to update your system or continue directly to the browser?')
if user_choice == yes:
    print('Beginning system update. Please wait.')
    time.sleep(7)  # Pause for 7 seconds
    print('System update complete. Opening browser.')
elif user_choice == no:
    print('Opening browser.')
else:
    print('Error 136. Please reload the system.')
time.sleep(2)
print('Browser activated. Opening .pjycs37 coding platform.')
time.sleep(2)
user_choice = input('Choose a programming language.')
print('Activating', user_choice, 'code platform.')
time.sleep(3)
print(user_choice, 'code platform activated.')
Regarding why JSSE is not offering EMS in the ClientHello, here is the explanation.
When FIPS-mode is turned on, the Red Hat Build of OpenJDK ensures all the cryptographic primitives come from the SunPKCS11 security provider, configured with NSS as its PKCS #11 back-end. This is a FIPS requirement, since NSS acts as the cryptographic module, subject to the FIPS-validation process. Even though the EMS extension is implemented in the SunJSSE provider, it requires a specific key-derivation primitive to be available through the SunPKCS11-NSS-FIPS provider. Given this primitive is not currently available, SunJSSE automatically disables the extension.
Why is it not available? Although an NSS vendor-specific mechanism exists, this hasn't been implemented in SunPKCS11 due to the lack of a standard PKCS #11 way of doing it. This is expected to be fixed in PKCS #11 v3.2, and we will be able to implement the required SunPKCS11 enhancement allowing the support of EMS in FIPS-mode (when that version of the standard is released).
I tested on Android version 13, and it doesn't work there either. I have been testing on different Android devices with different Android versions. According to all my tests, this package only supports Android 14. Regarding iOS, I have been working with iOS 17 and 18, and it works very well.
I just moved the project file into a new empty folder in the same directory, and that worked for me.
I know this is old, but at least as of now, it's pretty streamlined. If you open the GitLens sidebar in VS Code there should be a "Connect" button at the top next to icons of GitHub, GitLab, ADO, etc. Once clicked in the browser you select the respective service to connect with, in this case Azure DevOps, and it should redirect to ADO and request permissions.

Similar solution without having to explicitly specify columns or column index:
df = data.frame(person=c("a","b","c"),var1=c(1,2,3),var2=c(4,5,6))
df %>% mutate(total=rowSums(.[names(.)[names(.)!="person"]]))
The same thing happened to me. I spent about half a day writing fixes for my date-saving logic, then finally connected to the DB from local pgAdmin and saw everything was OK. It's just the GUI of Railway Postgres that breaks the date.
This is very confusing, but somehow on API 29 the system navigation bar does not get covered, while on API 33 it does. The only way I managed to avoid the overlay overlapping the navigation bar was by subtracting the navigation bar inset from the window height (currentWindowMetrics; these APIs are available from API 30) along with setting the layout gravity.
I also experimented with several flags, but none worked. In conclusion, this change was added somewhere between API 30 and 33.
Tried both options; neither helped. Still getting the same error from "flutter doctor".
Option 1:
1. curl -L https://get.rvm.io | bash -s stable
2. rvm install ruby-3.4.2
3. rvm use ruby-3.4.2
4. rvm --default use 3.4.2
5. sudo gem install cocoapods
Option 2:
1. rvm uninstall ruby-3.4.2
2. /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
3. brew install ruby
4. updated .zshrc, to point PATH to /opt/homebrew/opt/ruby/bin
5. source ~/.zshrc
6. brew install cocoapods
In both options, validated that ruby version is 3.4.2 by running "ruby -v" command.
Getting the same error and not able to overcome it. I have been working on this for two days. Any help would be appreciated.
Thanks!
Still in March 2025 and it is the only reference also compatible with Doctrine migrations :-)
I actually think I have an option here in a non-destructive way, albeit a touch hacky.
What I am doing is getting a list of all the top-level files and running a move for each of those items in a try/catch:
Move to (pathname + ".temp") and back.
This will fail if there are any user-control issues (open folders or greedy processes) and succeed if all processes will continue without said file/directory.
This catches most of my edge cases, and may help someone else who's trying the same.
I haven't coded it yet, but the most efficient way I can think of is by factoring and using the generated power set. Here's the basic idea.
Factoring
If a number is not prime, it can be factored into a product of primes. For instance:
4 = 2 * 2
6 = 2 * 3
66 = 2 * 3 * 11
132 = 2 * 2 * 3 * 11
Now let's look at the divisors of 66, which we can deduce from its factored state. They are:
1
2
3
11
2 * 3 = 6
2 * 11 = 22
3 * 11 = 33
2 * 3 * 11 = 66
But notice you'll find a similar result if you calculate the power set of {2, 3, 11}, which is
{ {}, {2}, {3}, {11}, {2, 3}, {2, 11}, {3, 11}, {2, 3, 11} }
Note the size of the resulting set is 8, the number of divisors of 66. There's a property of power sets which states that the size of a power set is 2^n, where n is the size of the original set. So you should be able to find the number of divisors of a number by factoring it and raising 2 to the number of prime factors.
But this does not work for all numbers. For instance, the same strategy won't work for 132, because the number 2 appears twice when factored.
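The standard fix for repeated prime factors is the divisor-count formula: if n = p1^e1 · p2^e2 · …, the number of divisors is (e1+1)(e2+1)…, which reduces to 2^k exactly when the number is squarefree. A small Python sketch (trial-division factoring, fine for modest n; function names are mine):

```python
from collections import Counter

def prime_factors(n):
    """Trial-division factorization; returns {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:           # whatever remains is a prime factor
        factors[n] += 1
    return factors

def divisor_count(n):
    """Number of divisors: product of (exponent + 1) over the factorization."""
    count = 1
    for exp in prime_factors(n).values():
        count *= exp + 1
    return count
```

For 66 = 2·3·11 every exponent is 1, so this gives 2·2·2 = 8, matching the power-set argument; for 132 = 2²·3·11 it gives 3·2·2 = 12, which the plain 2^n rule would miss.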
Were you able to resolve the issue?
To retrieve the view with the tag GoogleMapCompass in Kotlin, similar to the Java solution, you can adjust the position by modifying the rules of the RelativeLayout.
Here is an example where the button is positioned 50 pixels from the right and 150 pixels from the bottom:
val compassButton = (supportFragmentManager
.findFragmentById(R.id.map) as SupportMapFragment)
.requireView()
.findViewWithTag<View>("GoogleMapCompass")
val rlp = compassButton.layoutParams as RelativeLayout.LayoutParams
rlp.addRule(RelativeLayout.ALIGN_PARENT_END)
rlp.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM)
rlp.removeRule(RelativeLayout.ALIGN_PARENT_START)
rlp.removeRule(RelativeLayout.ALIGN_PARENT_TOP)
rlp.bottomMargin = 150
rlp.rightMargin = 50
For further customizations of the compass, you will need to create your own implementation.
In my case, I had forgotten to add the receiver to Manifest. Adding these lines fixed the issue:
<receiver
android:name=".AlarmReceiver"
android:exported="true" />
Node.js doesn't seem to support directory imports, based on this documentation. You have to explicitly point to your .js file, for example:
import … from "./database/index.js"
You may also find the suggestions in this old SO post helpful for your case.
I am facing the same issue while creating an instance of Db2 lite. I tried mentioning version 11 in the tags column in the "configure your resource" tab; it didn't work. Let me know if you found any solution or workaround for it.
I have just released a framework for 6DoF object detection and tracking in the web browser. It works nicely on mobile, even on low-end devices. It is released on GitHub under the MIT license here: https://github.com/WebAR-rocks/WebAR.rocks.train
As far as I know there's no dedicated test API key for the Google Cloud Translation API. But there are some strategies to mitigate costs during development, like applying API key restrictions such as IP addresses and HTTP referrers. For example, when creating your API key, restrict access to specific IP addresses. This limits usage to your local development machines or your test server's IP and prevents accidental or unauthorized access from other sources.
But if you just really want to do a lot of API Key testing, then maybe the 90-day $300 free trial for Google Cloud is another option for you.
My bad. The AsyncNotifier automatically wraps the state in an AsyncValue. So in my code it was an AsyncValue inside an AsyncValue, which gave this behaviour. Converting the state from AsyncValue<VirtualBook> to just VirtualBook solved the issue.
Your post is a bit tricky because it asks about stopping a thread (we'll call that "X") but strongly implies that reading SerialPort data is the ultimate goal (that we'll call "Y"). You said:
Any pointers for problems or a better / alternative way of doing it or any help is greatly appreciated.
In keeping with this, I'll try and share what's worked for me for both X and for Y.
Stopping the Thread (X)
If the code that you posted is the code you wish to keep (i.e. having a polling loop) then you might want to experiment with making the loop asynchronous so that you don't lose the UI thread context while the background work proceeds on an alternate thread. In this snippet:
- Passes the CancellationToken into Task.Delay, so a cancel exits the delay immediately with an OperationCanceledException.
- Calls ThrowIfCancellationRequested on each pass to throw if it's been cancelled.

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();
        checkBoxToggleServer.CheckedChanged += async (sender, e) =>
        {
            if (checkBoxToggleServer.Checked)
            {
                _cts?.Cancel();
                // Wait for previous run (if any) to cancel
                await _awaiter.WaitAsync();
                _cts = new CancellationTokenSource();
                try
                {
                    txtbox_log.AppendText("Serial Server Started", true, Color.Green);
                    while (true)
                    {
                        _cts.Token.ThrowIfCancellationRequested();
                        txtbox_log.AppendText(
                            $@"[{DateTime.Now:hh\:mm\:ss\ tt}] TEST! I'm running", true, Color.Blue);
                        await Task.Delay(TimeSpan.FromSeconds(2.5), _cts.Token);
                        // "do some more serial stuff here"
                    }
                }
                catch (OperationCanceledException)
                {
                    txtbox_log.AppendText("Serial Server Canceled", true, Color.Maroon);
                    checkBoxToggleServer.Checked = false;
                    _awaiter.Wait(0);
                    _awaiter.Release();
                }
            }
            else
            {
                if (_cts is not null && !_cts.IsCancellationRequested) _cts.Cancel();
            }
        };
    }

    SemaphoreSlim
        _awaiter = new SemaphoreSlim(1, 1),
        _criticalSection = new SemaphoreSlim(1, 1);
    CancellationTokenSource? _cts = null;
}
AppendText is an extension method for RichTextBox:

static class Extensions
{
    public static void AppendText(this RichTextBox @this, string text, bool newLine, Color? color = null)
    {
        var colorB4 = @this.SelectionColor;
        if (color is Color altColor) @this.SelectionColor = altColor;
        @this.AppendText($"{text}{(newLine ? Environment.NewLine : string.Empty)}");
        @this.SelectionColor = colorB4;
    }
}
Reading Serial Port Data
What I have found is that retrieving asynchronous data from a SerialPort takes on a different flavor because we're often listening to the DataReceived event and responding on an interrupt basis. This code snippet:
- Subscribes to the DataReceived event (in this case, using an inline lambda method).

public partial class MainForm : Form
{
    SerialPort _serialPort = new();
    public MainForm()
    {
        InitializeComponent();
        _serialPort.DataReceived += async (sender, e) =>
        {
            await _criticalSection.WaitAsync();
            if (!IsDisposed) BeginInvoke((MethodInvoker)delegate
            {
                try
                {
                    if (sender is SerialPort port)
                    {
                        while (port.BytesToRead > 0)
                        {
                            byte[] buffer = new byte[16];
                            int success = port.Read(buffer, 0, buffer.Length);
                            BeginInvoke(() =>
                            {
                                txtbox_log.AppendText($@"[{DateTime.Now:hh\:mm\:ss.ff tt}] ", false, Color.CornflowerBlue);
                                txtbox_log.AppendText(BitConverter.ToString(buffer, 0, success).Replace("-", " "), true);
                            });
                        }
                    }
                }
                finally
                {
                    _criticalSection.Release();
                }
            });
        };
        checkBoxToggleServer.CheckedChanged += (sender, e) =>
        {
            if (checkBoxToggleServer.Checked)
            {
                _serialPort.Open();
                txtbox_log.AppendText($"Serial Server Started", true, Color.Green);
            }
            else
            {
                _serialPort.Close();
                txtbox_log.AppendText("Serial Server Canceled", true, Color.Maroon);
            }
        };
    }

    SemaphoreSlim
        _awaiter = new SemaphoreSlim(1, 1),
        _criticalSection = new SemaphoreSlim(1, 1);
    CancellationTokenSource? _cts = null;
}
Turns out the answer was under my nose. I just looked through my notes and found out I had to use this command:
ps aux --sort -%mem | paste -d ' ' > running_processes.csv
I'm not sure where I heard it or read it, but the way I know is that you want a single owner for every issue (user story, bug or task). There may be different subtasks of that user story that may be assigned to other people, but ultimately the owner of the user story reflects the person that is ultimately responsible to make sure that user story gets done. That person should be the developer that will do the work. The developers usually prepare a testing subtask for the user story when they are tasking it for themselves. The testing subtask may be re-assigned to other testers if needed and testers may create additional testing subtasks based on what they see. If they find bugs, they can create subtasks under that user story to fix it before the sprint end. If it looks like the bug will not be fixed before the sprint finishes, it can be turned into its own bug (issue, not subtask). But ultimately, the developer is responsible to make sure it goes through all phases.
According to Google Support, Autocomplete (NEW) Requires BOTH: Places (New) API + Places Legacy to be enabled (who knows why?) -- I was instructed to enable Legacy by going to: https://console.cloud.google.com/apis/library/places-backend.googleapis.com
Once that is enabled, the Autocomplete element worked as expected when using their Sample code: https://developers.google.com/maps/documentation/javascript/examples/place-autocomplete-element
Visual isTranslatable: NO; reason: observation failure: noObservations is due to the Vision text analysis that is enabled by default for many views. You can disable this by setting fields like allowsVideoFrameAnalysis to false. I think that log is a warning that the subsystem does not detect any VisionKit-based OCR output (i.e. text on screen).
Please share the AVMutableComposition code, and not just the instruction. The composition creation is missing, and I believe the timing there is wrong, as you have a 00:00:00 to 00:00:00 time range. This tells me you did not insert an empty track and add a gap of 20 seconds to your mutable composition.
function isGmail($email) {
    // Normalize before comparing so "User@Gmail.com" also matches
    $email = strtolower(trim($email));
    return substr($email, -10) === "@gmail.com";
}
I've set it up in the gunicorn.conf.py file as given here:
proc_name = "gunicorn"
default_proc_name = "gunicorn"
I referred to the gunicorn docs: https://docs.gunicorn.org/en/stable/settings.html
And run the service like:
gunicorn --name "gunicorn" -c gunicorn.conf.py wsgi:app
Even if you don't specify the --name flag, the default is gunicorn.
When the gunicorn service starts, it sets the values correctly and logs accordingly.
deactivate # Exit venv
venv\Scripts\activate # Reactivate (Windows)
source venv/bin/activate # Reactivate (Mac/Linux)
streamlit run your_script.py
Run this in the terminal,
and run VS Code as administrator.
PHP is a server side script which means it can run processes, download files or initiate header redirections without outputting any HTML.
This is a known issue: Poetry issue 10032. More context in issues 10031 and 10033 too.
Even with package-mode set to false, Poetry 2.0 mandates the presence of the project.name configuration, even though according to PEP 621 it should not be required in that case.
I believe some documentation updates have been made to clarify this, as shown in issue 10033.
I believe the issue lies in the following line:
Image.fromarray(np.asarray([blueAmount, blueAmount, blueAmount]))
The numpy array you use here would have shape (3, x, y) rather than (x, y, 3), which may have been your intention.
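A sketch of the shape difference (the `blue` array here is a stand-in for the real channel data; Image.fromarray expects the channel axis last for an RGB image):

```python
import numpy as np

blue = np.zeros((4, 4), dtype=np.uint8)  # stand-in for the real channel data

# What the question built: the channel axis comes first.
chw = np.asarray([blue, blue, blue])          # shape (3, 4, 4)

# What Image.fromarray expects for RGB: channels last.
hwc = np.stack([blue, blue, blue], axis=-1)   # shape (4, 4, 3)
```

Passing `hwc` (or equivalently `chw.transpose(1, 2, 0)`) to Image.fromarray should then produce the intended image.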
The roles/redis.dbConnectionUser role is in Beta, and it is currently the only role that grants redis.clusters.connect for the Cloud Memorystore Redis Db Connection User. Check this page to see the list of all basic and predefined roles for Identity and Access Management (IAM): Memorystore Redis roles
If you've managed to get a Module selected, you may be able to ignore the error and Continue Anyway. For me it runs Gradle, says "Install successfully finished," and then there's an error about running the project (presumably because it's in release mode on my physical watch, not remote debug mode in an emulator), but I'm still able to install the modified watch face on my device.
Did you guys find any way to click the skip-ad button with JS?
Have you added the required manifest declarations to indicate that you support messaging notifications?
According to their documentation, Angular v18+ requires PrimeFlex 4.x.x+
I've created a NodeJS package to deal with callbacks and I want to present it as a possible solution to avoid 'callback hell'.
It allows you to run functions with callback arguments in parallel or in sequence, even accessing previous results (creating cascading, waterfall-style calls).
An example:
/*
Creates a log file from several other files using node:fs functions with callbacks.
*/
import path from "node:path";
import fs from "node:fs";
import { CB } from "callback-utility";
const logFile: string = path.resolve(__dirname, "mainLog.log"),
file1: string = path.resolve(__dirname, "file1.log"),
file2: string = path.resolve(__dirname, "file2.log"),
file3: string = path.resolve(__dirname, "file3.log"),
file4: string = path.resolve(__dirname, "file4.log");
// Create execution structure
const structCB =
CB.s ( // 🠄 sequential structure as root
// Delete current log file
CB.f ( fs.rm, logFile, {force: true}), // 🠄 Creates a function structure using CB.f()
// Create log from several files
CB.p ( // 🠄 parallel structure, since the order in which every file is written in
// log is not important (can be parallelized)
CB.s ( // 🠄 sequential structure
CB.f ( fs.readFile, file1, {encoding: 'utf-8'} ), // 🠄 read content
CB.f ( fs.appendFile, logFile, CB.PREVIOUS_RESULT1) // 🠄 write results from
// previous call to log file
),
// The same (in parallel) for every file ...
CB.s (
CB.f ( fs.readFile, file2, {encoding: 'utf-8'} ),
CB.f ( fs.appendFile, logFile, CB.PREVIOUS_RESULT1)
),
CB.s (
CB.f ( fs.readFile, file3, {encoding: 'utf-8'} ),
CB.f ( fs.appendFile, logFile, CB.PREVIOUS_RESULT1)
),
CB.s (
CB.f ( fs.readFile, file4, {encoding: 'utf-8'} ),
CB.f ( fs.appendFile, logFile, CB.PREVIOUS_RESULT1)
)
)
);
// Execute and retrieve results using Promise (async/await)
const objResult = await CB.e (structCB);
// Check results
if (objResult.timeout || objResult.error)
console.log("Something went wrong while creating the log");
else
console.log("Log created");
In the above example, 9 functions with callbacks were invoked and run in parallel or in sequence, according to the structure created to rule the execution.
All results can be retrieved at once.
Please find more info about it at npmjs.com/callback-utility
There are several good options available to address this issue, and I want to present a new one. I hope it will be useful.
"This webpage was reloaded because it was using significant energy" - Safari reloads the page and disrupts the conversation on it. I think it is a general and chronic issue in Safari. Is there any solution, and is anyone else experiencing the same problem?
After some hard debugging and searching, I think I found where I went wrong.
I declared Messages as an ObservableCollection, but only Add operations are notified to update the UI; a .Text field change is not. To make the UI aware that a .Text field changed in Messages, I need to have my ChatMessage class implement INotifyPropertyChanged.
After step 1, any message.Text change in Messages should trigger a UI update, but because AI inference is CPU-heavy, `generator.GenerateNextToken();` blocks the main thread, so in my case the UI was only updated after the AI part finished.
To solve my problem, I should make `generator.GenerateNextToken()` async so its execution won't block the main thread. I need to dig into Microsoft.ML.OnnxRuntimeGenAI to find an async method, or find another library that provides one.
Solved using match()
# Match-Method
t3 <- Sys.time()
v2_match <- letter_class[match(v1, data_key)]
Sys.time() - t3
# Time difference of 0.821255 secs
The best solution I found was to use imports via the Keras API. Instead of
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.optimizers import Adam
I now use
from keras.api.models import Sequential
from keras.api.layers import LSTM, Dense, Dropout
from keras.api.optimizers import Adam
This way you can still import the objects directly, without needing to reference their parent. Note that this was tested in Linux using Tensorflow version 2.12 (which includes Keras 3).
API reference: Keras 3 API. Cheers!
The reference website says that release branches are optional; you can release straight from trunk, or from a release branch. The latter can happen because you might want to “harden” a release. Which I interpret as cutting a release point, testing it, and incorporating urgent fixes. Meanwhile trunk can continue its life with whatever other changes that will not impact the release branch.
Keep in mind that there’s another dimension here:
A preemptive release branch (what we just discussed)
An after-release release branch
- - - ★ - ★ - ★ - ★ trunk
     \
      v1.0
The latter here is relevant if you released 1.0, a bug was found, and you need some version like 1.0.1 with just that bug fix. But trunk has many more commits at this point. That’s not a problem: just check out the tag and make a release branch. Then you can incorporate the change there.
How do you incorporate changes between the eternal trunk branch and
the releases?
The reference has its guideline for this:
The best practice for Trunk-Based Development teams is to reproduce the bug on the trunk, fix it there with a test, watch that be verified by the CI server, then cherry-pick that to the release branch and wait for a CI server focusing on the release branch to verify it there too.
Apparently this is the Trunk Based Development approach. But I disagree. This is not the correct approach if you want to handle changes in the best way with Git.
Take the bug on v1.0 example. Is the bug urgent enough to fix on top
of v1.0 and make a bug fix version? Then fix it there.
- - - ★ - ★ - ★ - ★ trunk
     \
      v1.0 - ★ (bugfix)
Then merge it into trunk:
- - - ★ - ★ - ★ - ★ - - - ★ (merge) trunk
     \                    /
      v1.0 - ★ (bugfix) -
Now the upcoming v1.0.1 (or whatever it will be) will have the commit.
Just query it:
git tag --contains=<bugfix commit>
As does trunk. Just query it:
git branch --contains=<bugfix commit>
You cannot directly query it if you use cherry-picks.
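As a minimal sketch of this merge-then-query flow (a throwaway repo; all branch and tag names here are made up):

```shell
#!/bin/sh
# Throwaway repo demonstrating that a fix merged from a release
# branch can be located with --contains (all names are made up).
set -e
cd "$(mktemp -d)"
git init -q .
git checkout -qb trunk
git config user.email you@example.com
git config user.name you
echo a > f; git add f; git commit -qm "initial"
git tag v1.0
echo b >> f; git commit -qam "more trunk work"
git checkout -q -b release-1.0 v1.0         # after-release branch from the tag
echo fix > g; git add g; git commit -qm "bugfix"
fix=$(git rev-parse HEAD)
git tag v1.0.1                              # the fix release
git checkout -q trunk
git merge -q --no-edit release-1.0          # bring the fix back to trunk
git branch --contains "$fix"                # lists release-1.0 and trunk
git tag --contains "$fix"                   # lists v1.0.1
```

Had the fix been cherry-picked instead of merged, the last two queries would not find it on trunk, because a cherry-pick creates a new commit with a different hash.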
You should also get fewer merge conflicts, since there is less merge-base drift when you avoid cherry-picks. Imagine multiple releases and multiple cherry-picks on top of release branches or trunk: Git has to go further back to find a common ancestor when calculating differences for future merges.
And if you have multiple releases? Merge upwards from the oldest
release to trunk. For the release branches corresponding to these
tags:
v1.0.1 into 1.5.5
1.5.5 into 1.7.0
1.7.0 into trunk
Imagine having to use cherry-picks for all of that instead. The work compounds.
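A toy repo (again with made-up names) showing the merge-upwards direction, where a fix committed on the oldest release branch reaches trunk through the chain of merges:

```shell
#!/bin/sh
# Toy repo: a fix on the oldest release branch travels upwards
# through a newer release branch to trunk (all names are made up).
set -e
cd "$(mktemp -d)"
git init -q .
git checkout -qb trunk
git config user.email you@example.com
git config user.name you
echo base > f; git add f; git commit -qm "base"
git branch release-1.5      # pretend these were cut earlier
git branch release-1.7
echo more > t; git add t; git commit -qm "trunk work"
git checkout -q release-1.5
echo fix > fix; git add fix; git commit -qm "fix on oldest release"
# Merge upwards, oldest first:
git checkout -q release-1.7; git merge -q --no-edit release-1.5
git checkout -q trunk;       git merge -q --no-edit release-1.7
git log --oneline | grep "fix on oldest release"
```

Each merge is a single command, and every branch in the chain ends up containing the one fix commit, queryable with `--contains` as above.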
The merge approach works well when you apply the change to the correct
place from the start. But sometimes you might apply a fix to trunk
and then later figure out that you want it in some release branch as
well. Use cherry-pick in that case since that’s the only option anyway.
This should not be done according to the reference website:
You should not fix bugs on the release branch in the expectation of cherry-picking them back to the trunk. Why? Well in case you forget to do that in the heat of the moment. Forgetting means a regression in production some weeks later (and someone getting fired).
(Why not merge instead of cherry-pick?)
The emphasis on “forgetting” seems arbitrary here, since their recommended approach is to fix on trunk, wait until CI passes, then finally cherry-pick to the release branch. Well, what if you forget to cherry-pick that way?
We might risk getting fired here according to their corporate crystal
ball. But fixing on the release branch and then merging to trunk is
both neater and prioritizes the most immediate need:
The release gets its fix first, which is the most immediate need
At worst you end up on trunk with a fix that works well for the release branch but not on trunk for some reason, which is a small inconvenience compared to a broken release
The merge carries the fix to trunk before the harm is done
The fix lands on trunk as part of the normal flow, so you don’t have to worry about forgetting it in the first place
In TBD, where should version updates (pom.xml changes) happen?
This does not have anything to do with any version strategy. The Maven
Masters demand a build to have the correct version. So you need to
have that on whatever commit you choose to release on.
What’s the recommended way to handle multiple active versions in TBD without causing confusion or conflicts?
It is poorly thought out. See the reference website again:
Merge Meister role
The process of merging commits from trunk to the release branch using ‘cherry pick’ is a role for a single developer in a team. Or dev pair, if you are doing Extreme Programming. Even then, it is a part time activity. The dev or pair probably needs to police a list of rules before doing the cherry pick. Rules like which business representative signed off on the merge. Perhaps the role should also rotate each day.
Some teams update a wiki to audit what made it to the release branch after branch cut, and some use ticket system as this by its nature interrupting and requiring of an audit trail of approvals.
The “merging” here means cherry-picking all changes that are going into a release.
No thanks. All you need:
trunk is the default target for all development
Urgent fixes for a released version go on the release branch, not trunk (because trunk has moved on and has things that should not go into the fix release)
The release branch is merged back into trunk
Now, as mentioned, it is simple to query exactly what commits are in what tags and branches. It’s simple to see the discrepancy between any two points in the history. No “wiki to audit” needed beyond the standard fare (maybe issue tracker keys from the commit messages).
p4 add -f -t symlink <dir1>, where -f is necessary
You can read CSV files in C++ with a pandas-like Python syntax using my library: https://github.com/hima12-awny/read-csv-dataframe-cpp
✔ Complete the missing parts of the code. (For example, find out what is inside the init() function.)
✔ Fix the error and run it again. (method="init()" instead of method="init()')")
✔ Research how this code is used in an ADF document.
✔ Test it on your own computer and try to see the output.
If this code seemed complex to you, like a mix of JavaScript and ADF, you can start by learning JavaScript first. If you don't understand a specific part of the code, you can isolate that part and ask about it! 🚀
You are violating Google's terms of use! They do not allow you to repeatedly automate their websites or services without paying for an API license, which is why they have anti-bot captchas protecting their systems.
Try picking another website to automate that allows you to do such things!
I'm having the same problem when installing flash-attn on Windows. Unfortunately PyTorch no longer supports conda installation!

I know this is an old discussion, but I think it's worth adding a probably better solution, since I did not find any on the net. My scenario is that I want to move folders containing media files (movies), so copying takes a long time. Imagine moving a number of movies extracted from Blu-ray discs and DVDs and you see what I mean.
The thing is that it is not possible to move a folder with its content. But a file can be moved to another branch in the same folder tree, as long as it is not moved outside the 'approved' folder tree; in my case it is approved by the user via the FolderPicker.
So, first I create a folder at the new location, then I rename the files to the path and name they will have in the folder at the new location, and finally I remove the old folder.
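The three steps can be sketched with plain files like this (the actual answer uses the WinRT StorageFolder/StorageFile APIs; all paths here are made up for illustration):

```shell
#!/bin/sh
# Sketch of the create-move-remove sequence described above.
set -e
root=$(mktemp -d)
mkdir -p "$root/old/sub"
echo movie > "$root/old/sub/movie.mkv"
# 1) Create the folder at the new location
mkdir -p "$root/new/sub"
# 2) "Rename" each file to the path and name it will have there
mv "$root/old/sub/movie.mkv" "$root/new/sub/movie.mkv"
# 3) Remove the old folder
rm -r "$root/old"
ls "$root/new/sub"
```

The move in step 2 is a rename, not a copy, which is why this is fast even for large media files on the same volume.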
I use a simple workaround. I paste the text:
first linesecond line
into any text editor (e.g. Notepad, but you can also do this inside Xcode), add a new line in the proper place:
first line
second line
and copy the two lines into Localizable.xcstrings. It works.
I know too little about XML... Found this: stackoverflow.com/questions/64243628/….
Looks like I had a namespace:
xml_ns(doc) outputs "d1"
Now running
xml_find_all(doc, "//d1:t") # finds the text nodes!
After numerous solutions, the one which works for all my APIs (chat API, image API) using Spring AI: try version 3.2.3 of Spring Boot. After that modification, I can start.
https://github.com/spring-projects/spring-ai/issues/683
Try using ALT + mouse drag (hold ALT while selecting with the left mouse button); it will select without the empty lines.
In the end I solved this by using a shell script. I looked all around in the docs, and StarRocks has no such thing as UDFs or procedures to ALTER TABLES, and no cursors or recursive functions either.
Has anybody implemented same thing in LangGraph?
I want to export the verbose output to a variable. Currently, initialising the llm with the arguments prints the verbose output to the command line.
In migrations.AlterField you can check what the default value is. The script may be reading this migration file and raising the error before the change to the proper default. I have run into this issue previously; you can either manually change the values or delete the whole file and run it again. Source: previously had this issue.
Thanks Leon. Unfortunately it still doesn't work. I inserted this in the Manifest:
<uses-permission android:name="android.Manifest.permission.WAKE_LOCK"/>
<receiver android:name=".MyReceiver" android:exported="true" android:enabled="true">
<intent-filter>
<action android:name="com.google.firebase.MESSAGING_EVENT" />
</intent-filter>
</receiver>
I made the receiver like this:
[BroadcastReceiver(Name= "MyReceiver", Enabled = true,Exported =true)]
[IntentFilter(["com.google.firebase.MESSAGING_EVENT"])]
public class MyReceiver : BroadcastReceiver
{
public override void OnReceive(Context? context, Intent? intent)
{
if (intent != null && context!=null)
{
Intent serviceIntent = new(context, typeof(NotificationMessagingService));
if (Build.VERSION.SdkInt >= BuildVersionCodes.O)
{
Android.App.Application.Context.StartForegroundService(serviceIntent);
}
else
{
Android.App.Application.Context.StartService(serviceIntent);
}
Intent main = new(context, typeof(MainActivity));
context.StartActivity(main);
}
}
}
I also tried to insert the full name in the Receiver name, with no success.
The Messages I send are of this type:
Message message = new()
{
Data = new Dictionary<string, string>()
{
{"xxx","xxx"},
{"yyy","yyy"}
},
Topic = "gggg"
};
Do you have any other suggestions?
Thanks.
When the configs are managed from the Ambari UI, here are the steps.
Go to
Ambari -> YARN -> Configs -> Advanced Configs -> Advanced container-executor
The default values are
banned.users=hdfs,yarn,mapred,bin
Remove the hdfs user and any other users as required. I prefer to remove hdfs and hive.
A restart of the YARN service afterwards should apply the values.
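For reference, after removing hdfs from the default list, the resulting property (as it would appear in the container-executor config; exact file location varies by distribution) would look like:

```
banned.users=yarn,mapred,bin
```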
@Jalpa, is the middleware AND ErrorBoundary needed to be able to handle all unhandled exceptions? e.g., does this depend on the render mode? does it handle circuit errors (eg, temporary & full Blazor disconnects), errors in razor components, errors in controllers, errors in the DB?