Use git-credential-manager
brew install --cask git-credential-manager
The next time you use git, it will automatically offer to open a web browser and authenticate you. You don't have to worry about configuring permissions yourself.
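If git does not pick it up automatically, a one-time setup step wires it into your git config (assuming the git-credential-manager CLI is on your PATH):
git-credential-manager configure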
It looks like the root cause is below
https://issuetracker.google.com/issues/36934789
https://issuetracker.google.com/issues/37063737
https://issuetracker.google.com/issues/36926748
None of them has been fixed by Google (as of API 35).
I ran into the same problem in my application after upgrading Spring.
With Tomcat 10.1.42 I managed to solve it with a property in application.properties:
server.tomcat.max-part-count=30
I am facing the same issue while pushing my Node project. What should I do?
A few things that might help others (including me) assist you better:
1. What Node.js library are you using for NFC communication? (e.g., `nfc-pcsc`)
2. Can you share the APDU commands you’re using to write `PWD`, `PACK`, and `AUTH0`?
3. Is the tag already locked when you try to write?
4. Are you getting any specific error codes or responses from the reader?
Also, have you checked the NTAG213 datasheet? The protection config pages are usually in the E3 to E6 range, and AUTH0 sets the page where protection begins.
If you share some code, I can try to debug or help further.
If you are using an emulator, try running your code on a physical device. That is how I solved my issue.
Add /a/* to the .gitignore, I'd assume. You can e.g. do /a/*.txt for just text files.
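A minimal .gitignore sketch of both variants:
# ignore everything inside a/
/a/*
# or only text files inside a/
/a/*.txt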
I had a similar issue that was caused by including shell completions in my .zprofile. I specifically had:
if command -v ngrok &>/dev/null; then
eval "$(ngrok completion)"
fi
This is an issue because .zprofile runs before Zsh sets up completions. Moving this logic to .zshrc solved it for me.
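For reference, a minimal sketch of where this ends up, assuming a standard compinit setup in ~/.zshrc:
# ~/.zshrc: completions are initialized here, so tool-generated
# completion scripts can register themselves safely
autoload -Uz compinit && compinit
if command -v ngrok &>/dev/null; then
  eval "$(ngrok completion)"
fi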
It looks like the edited version was overwritten or misnamed, so let's rename the correct existing file:
from shutil import copyfile

source_path = "/mnt/data/A_digital_illustration_in_retro-futurism_and_anime.png"
dest_path = "/mnt/data/Crash_to_rewind_final.png"
copyfile(source_path, dest_path)
This seems like it would be a simple thing to do, but unfortunately it’s something we don’t support and have a Defect on our backlog to fix this.
While there is not a way to add a Note and have the checkbox appear checked, you can send <ActExpDt>2300-12-30</ActExpDt> to make it not expire, but the checkbox still won’t show as checked.
Everyone -
Here's what I've done, and it seems to have worked.
Took the "meat" of the script (i.e. that which was posted), created a brand new file, then ran the script. Success. I don't know why, other than somehow/maybe, non-visible characters were included in original, and now are no longer exist.
Appreciate everyone who tried. Thanks!
What should I do when the following error occurs?
MissingPackageManifestError: Could not find one of 'package.json' manifest files in the package
For the inner-box class, add the position styles:
.inner-box {
  flex: 1;
  background-color: transparent;
  border-radius: 8px;
  border: 2px solid white;
  position: relative;
  top: -8px;
}
This can be achieved by adding the CSS property break-inside to the child items.
Adding the following to the CSS in your CodePen achieves the desired effect:
.sub-menu {
  break-inside: avoid;
}
Most of the issues come from trying to model the 3D situation with a 1D potential well.
Some notes:
Your model is assuming (and enforcing) spherical symmetry; most of the real wave functions do not have spherical symmetry, instead involving spherical harmonics to parameterize the angular dependence.
Some of your 1D eigenfunctions are antisymmetric, f(-x) = -f(x). These are invalid as potential solutions, since we are trying to pretend x is a radius under spherical symmetry, and they need to be excluded.
Your lowest state is entirely artificial: without your regularization/softening at 0 it would have negative infinite energy and be infinitely thin (this couldn't be resolved numerically). Its value is purely a result of the softening and resolution.
You may need to reconsider the 1D approach, or introduce something to account for deviations from spherical symmetry.
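For reference, the standard reduction of the 3D problem to a 1D radial one (a sketch, assuming a central potential V(r)) substitutes u(r) = r R(r), giving

-\frac{\hbar^2}{2m} u''(r) + \left[ V(r) + \frac{\hbar^2 \ell(\ell+1)}{2m r^2} \right] u(r) = E\, u(r), \qquad u(0) = 0,

so only eigenfunctions vanishing at the origin are physical, and the angular structure enters through the \ell(\ell+1) centrifugal term rather than through symmetric/antisymmetric 1D states.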
// login_screen.dart
import 'package:flutter/material.dart';
import 'package:firebase_auth/firebase_auth.dart';

class LoginScreen extends StatefulWidget {
  @override
  _LoginScreenState createState() => _LoginScreenState();
}

class _LoginScreenState extends State<LoginScreen> {
  final _phoneController = TextEditingController();
  final _otpController = TextEditingController();
  final FirebaseAuth _auth = FirebaseAuth.instance;

  String _verificationId = '';
  bool _otpSent = false;

  void _sendOTP() async {
    await _auth.verifyPhoneNumber(
      phoneNumber: '+91' + _phoneController.text.trim(),
      verificationCompleted: (PhoneAuthCredential credential) async {
        await _auth.signInWithCredential(credential);
      },
      verificationFailed: (FirebaseAuthException e) {
        ScaffoldMessenger.of(context).showSnackBar(
            SnackBar(content: Text('Verification failed: ${e.message}')));
      },
      codeSent: (String verificationId, int? resendToken) {
        setState(() {
          _verificationId = verificationId;
          _otpSent = true;
        });
      },
      codeAutoRetrievalTimeout: (String verificationId) {
        _verificationId = verificationId;
      },
    );
  }

  void _verifyOTP() async {
    PhoneAuthCredential credential = PhoneAuthProvider.credential(
      verificationId: _verificationId,
      smsCode: _otpController.text.trim(),
    );
    await _auth.signInWithCredential(credential);
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Login via OTP')),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            TextField(
              controller: _phoneController,
              keyboardType: TextInputType.phone,
              decoration: InputDecoration(labelText: 'Phone number'),
            ),
            // The original snippet was cut off here; what follows is a minimal
            // completion based on the _otpSent flag and the handlers above.
            if (_otpSent)
              TextField(
                controller: _otpController,
                keyboardType: TextInputType.number,
                decoration: InputDecoration(labelText: 'OTP'),
              ),
            ElevatedButton(
              onPressed: _otpSent ? _verifyOTP : _sendOTP,
              child: Text(_otpSent ? 'Verify OTP' : 'Send OTP'),
            ),
          ],
        ),
      ),
    );
  }
}
Does this help you?
import unittest
from unittest.mock import MagicMock, patch

import pexpect  # needed for pexpect.exceptions.TIMEOUT below

# connect(), send(), MAC, and the module-level `gatt` variable are assumed to
# come from the module under test, e.g.:
# from gatt_module import connect, send, MAC, gatt


class TestGatttoolInteraction(unittest.TestCase):
    """
    Test suite for the gatttool interaction functions.
    Mocks pexpect.spawn to avoid actual external process calls.
    """

    @patch('pexpect.spawn')  # Patch pexpect.spawn globally for all tests in this class
    def test_connect_success(self, mock_spawn):
        """
        Tests that the connect function correctly spawns gatttool,
        sends the connect command, and expects success.
        """
        # Configure the mock pexpect.spawn object.
        # When mock_spawn() is called, it returns a mock object (mock_gatt).
        mock_gatt = MagicMock()
        mock_spawn.return_value = mock_gatt

        # Simulate gatttool's response to the connect command.
        # When mock_gatt.expect is called, we want it to succeed without error.
        # We don't need to specify a return value for expect if it just needs to not raise.
        mock_gatt.expect.return_value = 0  # A common return for success in pexpect

        # Call the function under test
        connect()

        # Assertions to verify correct behavior:
        # 1. Verify pexpect.spawn was called with the correct command
        mock_spawn.assert_called_once_with("sudo gatttool -I")
        # 2. Verify the correct connect command was sent
        mock_gatt.sendline.assert_any_call("connect " + MAC)
        # 3. Verify 'Connection successful' was expected
        mock_gatt.expect.assert_called_once_with("Connection successful")
        # 4. Verify the global 'gatt' variable was set to the mock object
        self.assertIs(gatt, mock_gatt)

    @patch('pexpect.spawn')
    def test_connect_failure(self, mock_spawn):
        """
        Tests that connect raises an exception if 'Connection successful'
        is not found, simulating a connection failure.
        """
        mock_gatt = MagicMock()
        mock_spawn.return_value = mock_gatt

        # Configure expect to raise an exception, simulating failure to find expected text
        mock_gatt.expect.side_effect = pexpect.exceptions.TIMEOUT(
            'Timeout waiting for "Connection successful"')

        # Assert that the function raises the expected pexpect exception
        with self.assertRaises(pexpect.exceptions.TIMEOUT):
            connect()

        mock_spawn.assert_called_once_with("sudo gatttool -I")
        mock_gatt.sendline.assert_called_once_with("connect " + MAC)
        mock_gatt.expect.assert_called_once_with("Connection successful")

    @patch('pexpect.spawn')  # Patch pexpect.spawn for this test
    def test_send_success(self, mock_spawn):
        """
        Tests that the send function correctly calls connect(),
        sends the char-write-req command, and expects success.
        """
        mock_gatt_instance = MagicMock()
        # Ensure that each call to pexpect.spawn() (e.g., from connect())
        # returns the same mock object in this test context.
        mock_spawn.return_value = mock_gatt_instance

        # Simulate successful responses for both connect() and send() operations;
        # expect() should not raise an error.
        mock_gatt_instance.expect.return_value = 0

        test_value = "ab"
        send(test_value)

        # Assertions:
        # 1. Because send() calls connect(), which *re-spawns* and overwrites 'gatt',
        #    we only care about the state after the final connect(). The important
        #    thing is that gatt.sendline and gatt.expect are called correctly on the
        #    *final* gatt object. Since connect() is called once, pexpect.spawn will
        #    be called once.
        mock_spawn.assert_called_once_with("sudo gatttool -I")
        # 2. Verify the connect command was sent by connect()
        mock_gatt_instance.sendline.assert_any_call("connect " + MAC)
        # 3. Verify 'Connection successful' was expected by connect()
        mock_gatt_instance.expect.assert_any_call("Connection successful")
        # 4. Verify the characteristic write command was sent
        expected_write_command = f"char-write-req 0x000c {test_value}0"
        mock_gatt_instance.sendline.assert_any_call(expected_write_command)
        # 5. Verify 'Characteristic value was written successfully' was expected
        mock_gatt_instance.expect.assert_any_call("Characteristic value was written successfully")

        # Ensure gatt.sendline and gatt.expect were called for both operations.
        # We use call_count to ensure both connect and send operations occurred on the mock.
        self.assertEqual(mock_gatt_instance.sendline.call_count, 2)
        self.assertEqual(mock_gatt_instance.expect.call_count, 2)

    @patch('pexpect.spawn')
    def test_send_write_failure(self, mock_spawn):
        """
        Tests that send raises an exception if 'Characteristic value was written successfully'
        is not found, simulating a write failure.
        """
        mock_gatt_instance = MagicMock()
        mock_spawn.return_value = mock_gatt_instance

        # Set up expect to succeed for the 'connect' call, then fail for the write
        mock_gatt_instance.expect.side_effect = [
            0,  # Success for "Connection successful"
            pexpect.exceptions.TIMEOUT(
                'Timeout waiting for "Characteristic value was written successfully"'),
        ]

        test_value = "1a"
        with self.assertRaises(pexpect.exceptions.TIMEOUT):
            send(test_value)

        mock_spawn.assert_called_once_with("sudo gatttool -I")
        mock_gatt_instance.sendline.assert_any_call("connect " + MAC)
        mock_gatt_instance.expect.assert_any_call("Connection successful")
        mock_gatt_instance.sendline.assert_any_call(f"char-write-req 0x000c {test_value}0")
        mock_gatt_instance.expect.assert_any_call("Characteristic value was written successfully")
        self.assertEqual(mock_gatt_instance.sendline.call_count, 2)
        self.assertEqual(mock_gatt_instance.expect.call_count, 2)


if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)  # exit=False prevents sys.exit()
If you are the only one working on the branch, you can do:
See commit history / reflog: git reflog
Reset to the old commit: git reset --hard <old-sha>
Force push to remote: git push --force origin <branch-name>
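If someone else might have pushed to the branch in the meantime, a slightly safer variant refuses to overwrite commits you haven't fetched:
git push --force-with-lease origin <branch-name>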
Something like =MAX(FILTER(B:B,A:A="apples"))
should work.
(Note that, unless the cell with this formula is formatted for Dates, this will show as a number instead of a date; it can then be formatted as a date via right-click -> Format Cells... -> Date.)
Something was wrong with the virtual environment. I deleted it, created it once again, installed just Flask, and debug mode works fine. Problem solved.
Did you find an alternative? I'm having the same problem.
I faced a vague rejection message from Play Store too. https://playtrust.pulsecode.in gave a clear checklist of what to fix. Fixed them and resubmitted — got accepted!
I always use the IDE that I'm currently working with. For instance, if I'm using VSCode to write Vue code, I prefer to keep everything within that environment.
Using an IDE like VSCode can enhance the way you write code due to features like autocompletion, tips, and more. I enjoy using VSCode for frontend development, and I believe it's more of a personal preference than the "right way to do things."
If you're undecided about which IDE to choose, I recommend sticking with VSCode; it's excellent for beginners.
Note that the accepted answer seems to be bullshit AI slop. django_cgroup does not exist and a Google search only links to this post.
Modified my /app/_layout.tsx - removed the slot and added the route for the (tabs) ... that seemed to work.
<AuthProvider>
<Stack>
<Stack.Screen name="(tabs)" options={{ headerShown: false }} />
</Stack>
</AuthProvider>
I've worked on the exact same project with DQN and can offer some insights. I'm typically able to achieve an average reward of 490+ over 100 consecutive episodes, well within a 500-episode training limit. Here's my analysis of your setup.
(A quick note: I can't comment on the hard update part specifically, as I use soft updates, but I believe the following points are the main bottlenecks.)
We generally think a large replay buffer leads to a more uniform sample distribution, which is true to an extent. Even with a FIFO (First-In, First-Out) principle, the distribution remains stable.
However, this comes with significant risks:
It accumulates too many stale experiences. When your model samples from the buffer to learn, it's overwhelmingly likely to draw on old, outdated samples. This severely hinders its ability to learn from recent, more relevant experiences and thus, to improve.
It introduces significant feedback delay. When your target network updates, it immediately collects new experiences from the environment that reflect its current policy. These new, valuable samples are then added to the replay buffer, but they get lost in the vast sea of older experiences. This prevents the model from quickly understanding whether its current policy is effective.
In my experience, a buffer size between 1,000 and 5,000 is more than sufficient to achieve good results in this environment.
Generally, a larger batch size provides a more stable and representative sample distribution for each learning step. Imagine if your batch size was 1; the quality and variance of each sample would fluctuate dramatically.
With a massive replay buffer of 100,000, sampling only 32 experiences per step is highly inefficient. Your model has a huge plate of valuable data, but it's only taking tiny bites. This makes it very difficult to absorb the value contained within the buffer.
A good rule of thumb is to scale your batch size with your buffer size. For a buffer of 1,000, a batch size of 32 is reasonable. If you increase the buffer to 2,000, consider a batch size of 64. For a 5,000-sized buffer, 128 could be appropriate. The ratio between your buffer (100,000) and batch size (32) is quite extreme.
The standard for this environment is typically a maximum of 500 steps per episode, after which the episode terminates.
I noticed you set this to 100,000. This is an incredibly high value and makes you overly tolerant of your agent's failures. You're essentially telling it, "Don't worry, you have almost infinite time to try and balance, just get me that 500 score eventually." A stricter termination condition provides a clearer, more urgent learning signal and forces the agent to learn to achieve the goal efficiently.
I stick to the 500-step limit and don't grant any extensions. I expect the agent to stay balanced for the entire duration, or the episode ends. Trust me, the agent is capable of achieving it! Giving it 100,000 steps might be a major contributor to your slow training (unless, of course, your agent has actually learned to survive for 100,000 steps, which would result in game-breakingly high rewards).
I use only two hidden layers (32 and 64 neurons, respectively), and it works very effectively. You should always start with the simplest possible network and only increase complexity if the simpler model fails to solve the problem. Using 10 hidden layers for a straightforward project like CartPole is excessive.
With so many parameters to learn, your training will be significantly slower and much harder to converge.
Your set of hyperparameters is quite extreme compared to what I've found effective. I'm not sure how you arrived at them, but from an efficiency standpoint, it's often best to start with a set of well-known, proven hyperparameters for the environment you're working on. You can find these in papers, popular GitHub repositories, or tutorials.
You might worry that starting with a good set of hyperparameters will prevent you from learning anything. Don't be. Due to the stochastic nature of RL, even with identical hyperparameters, results can vary based on other small details. There will still be plenty to debug and understand. I would always recommend this approach to save time and avoid unnecessary optimization cycles.
This reinforces a key principle: start simple, then gradually increase complexity. This applies to your network architecture, buffer size, and other parameters.
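To make that baseline concrete, here is a minimal sketch collecting the settings suggested above (names are illustrative, not from your code):
# Baseline DQN settings for CartPole, per the advice above
config = {
    "replay_buffer_size": 5_000,    # 1,000-5,000 is plenty here
    "batch_size": 128,              # scaled with the buffer size
    "max_steps_per_episode": 500,   # standard CartPole limit, no extensions
    "hidden_layers": (32, 64),      # two small hidden layers suffice
}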
Finally, I want to say that you've asked a great question. You provided plenty of information, including your own analysis and graphs, which is why I was motivated to give a detailed answer. Even without looking at your code, I believe your hyperparameters are the key issue. Good luck!
I cannot say for certain what the reasoning was behind the deprecation, but seeing as clEnqueueBarrierWithWaitList() was added at the same time, it was likely just renamed to clean up the API and avoid confusion with clWaitForEvents(). The only difference between clEnqueueBarrierWithWaitList() and clEnqueueWaitForEvents() that I can see is that clEnqueueBarrierWithWaitList() adds the ability to create an event that allows querying the status of the barrier.
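A minimal sketch of that extra capability, assuming an existing command queue:
#include <CL/cl.h>

void barrier_with_status(cl_command_queue queue) {
    cl_event barrier_evt;
    /* Barrier over everything previously enqueued; hands back an event. */
    clEnqueueBarrierWithWaitList(queue, 0, NULL, &barrier_evt);

    /* The event lets you query the barrier's own execution status. */
    cl_int status;
    clGetEventInfo(barrier_evt, CL_EVENT_COMMAND_EXECUTION_STATUS,
                   sizeof(status), &status, NULL);
    clReleaseEvent(barrier_evt);
}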
I have recently been working on something similar, and while I know that this is an old post I thought I should post the solution that I came to. I have found that geom_pwc() does this and just works.
As an example using the ToothGrowth dataset:
ggboxplot(ToothGrowth,
x = "dose",
y = "len",
color = "dose",
palette = "jco",
add = "jitter",
facet.by = "supp",
short.panel.labs = FALSE)+
geom_pwc(method = "wilcox.test",
label = "p.signif",
hide.ns = TRUE)
In my case, the same issue was due to using System.Text.Json v9.0.0 together with .NET 6.
I managed to solve this by downgrading System.Text.Json to v8.0.5, which is non-vulnerable, non-deprecated as of June 2025.
If you have the possibility to do so, though, it would be better to upgrade the target framework to .NET 8 or later and that would solve the issue as well.
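For reference, the pin looks like this in the project file (adjust to your own csproj):
<ItemGroup>
  <PackageReference Include="System.Text.Json" Version="8.0.5" />
</ItemGroup>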
data modify
{"status":400,"headers":{},"requestID":null,"error":{"message":"Malformed input request: #: subject must not be valid against schema {"required":["messages"]}#/messages/1/content: expected minimum item count: 1, found: 0#/messages/1/content: expected type: String, found: JSONArray, please reformat your input and try again."}}
Try turning off the HUD. It can cause problems with some sites.
Well, I resorted back to using my phone to preview my apps. I still have my virtual device, which I recently opened, but it sometimes bundles slowly.
I encountered a similar error when doing this test scenario (which worked in spring boot 3.2.5 but not anymore in spring boot 3.5.2):
@SpringBootTest
@AutoConfigureMockMvc
class DefaultApiSecurityTest {

    @Autowired private WebApplicationContext context;

    private MockMvc mvc;

    @BeforeEach
    public void init() {
        this.mvc = MockMvcBuilders.webAppContextSetup(context).apply(springSecurity()).build();
    }

    @Test
    void accessToPublicRoutesAsAnonymousShouldBeGranted() throws Exception {
        this.mvc.perform(MockMvcRequestBuilders.get("/v3/api-docs")).andExpect(status().isOk());
    }
}
The solution was to follow https://stackoverflow.com/a/79322542/7059810, maybe the problem here was similar where the update ended up making the test scenario call a method which was now returning a 500 error.
LLVM team confirms that this is a compiler bug: see https://github.com/llvm/llvm-project/issues/145521
To expand on @Skenvy's answer: if the check that you want to rerun uses a matrix to run multiple variations, the list of check runs from the GitHub API used in the "Rerequest check suite" step will have a different entry for each variation, with different names but the same check id. To handle this case, we need to filter the output of that API call by checks whose name starts with JOB_NAME (instead of matching exactly) and then get the unique values so the same ID doesn't get retriggered multiple times, which causes the "Rerequest check suite" step to fail.
Here's an updated jq line to use in the "Get check run ID" step that will do this:
jq '[.check_runs[] | select(.name | startswith("${{ env.JOB_NAME }}")) | select(.pull_requests != null) | select(.pull_requests[].number == ${{ env.PR_NUMBER }}) | .check_suite.id | tostring ] | map({(.):1}) | add | keys_unsorted[] | tonumber'
You need to use project before the render to select just the columns you want to appear in the chart:
| project timestamp, duration
so in full would be:
availabilityResults
| where timestamp > ago(24h) //set the time range
| where name == "sitename" //set the monitor name
| project timestamp, duration
| render areachart with (xcolumn=duration,ycolumns=timestamp)
I have run into this more than once. Close Visual Studio, delete the bin, obj, and .vs folders, then restart, and it works.
yes, you can set enabled to false and done!
Your data is in the multibyte encoding UTF-8 without a BOM. The encoding last sent is windows-1252, which is why you see 3 bytes for some symbols.
There are straight double quotes (") that are one byte, and opening (“) and closing (”) curly quotes that require 3 bytes each. There are a lot of others, like ', ‘ and ’.
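A quick way to see the byte counts (a Python sketch):
# Curly quotes take three bytes in UTF-8; straight ASCII quotes take one.
for ch in ['"', '\u201c', '\u201d', "'", '\u2018', '\u2019']:
    print(repr(ch), ch.encode('utf-8'), len(ch.encode('utf-8')), 'byte(s)')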
Your code is invalid Fortran, so a Fortran processor can do anything it wants. This includes doing what you expect or deleting your filesystems.
Fortran 2023, page 163
10.1.5.2.4 Evaluation of numeric intrinsic operations
The execution of any numeric operation whose result is not defined by the arithmetic used by the processor is prohibited.
The prohibition is not a numbered constraint, so a Fortran processor need not catch it or issue an error or warning. The prohibition is on the programmer.
From version 6.10, you can use the SearchField component :
https://doc-snapshots.qt.io/qt6-6.10/qml-qtquick-controls-searchfield.html
It is likely my company's hook that adds a prefix to the commit message.
Issue fixed after creating a simple file (index.html, for example). The import doesn't work with an empty repository.
For some reason, no errors are generated when using wildcards in the path of Copy-Item (as @mklement0 states in the comments). Using the Filter param instead should bypass this behavior.
Copy-Item "$UpdateSourceFolder\" -Filter * "$UpdateDestinationFolder" `
-Recurse -ErrorAction Stop
If still needed, I have written this script to produce a new Hyper-V VM from an existing one (which acts as a template): https://github.com/ageoftech/hyperv-vm-builder
Polars is actually perfect for this because it's similar to pandas and allows lazy evaluation of your data/queries, and if I remember correctly, they currently have GPU support in beta.
Multicriteria objectives are for linear and integer problems only.
If you have several criteria and at least one that is quadratic:
you could minimize the first one, get a solution with crit1=value1, then add a constraint to force it to be optimal or good enough (crit1 to be <= value1 + epsilon), and optimize the second criterion, and so on.
or you could use piecewise linear approximations instead of the quadratic terms.
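In sketch form, the constraint-based option is the loop:

\min f_1 \to f_1^{*}; \quad \text{add } f_1 \le f_1^{*} + \varepsilon; \quad \min f_2 \to f_2^{*}; \quad \ldots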
(Of course, in your example, there is a single criterion, so no need to use a multicriteria objective, just remove "staticLex")
Just compare it with a non-existent variable ({NULL}) to compare it with null:
<f:if condition="{user.usergroup} != {NULL}">
As far as I can tell, this just doesn't work. I've switched to using yet another cloudwatch exporter, which succeeds at this.
The best option for you is the last one, a custom "x-axis". To fix the size of the column, go to 'Customize', find 'Stacked Style' and change it to 'Stack'.
Thank you, good answer, but I can't find how to recolor the sender and receiver text with different colors. I see props for the background but not for the text.
markdown: {
text: {
color: theme.newTheme.textIcon_Inverse,
fontSize: 18,
fontWeight: 400,
lineHeight: 20,
}
},
receiverMessageBackgroundColor: theme.newTheme.backgroundInverse,
senderMessageBackgroundColor: theme.newTheme.backgroundTertiary,
but I tried messageUser and it's not working
What about those of us using expo-image-picker, which is the same as the Android photo picker? I am still stuck on this issue, for over two weeks now, and it is really, really frustrating.
Check whether you have the correct Java version set globally. It might be that the Java version differs in the place where you are running the mvn commands.
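A quick way to check all three places at once:
java -version    # the JVM on your PATH
mvn -version     # shows which JVM Maven actually uses
echo $JAVA_HOME  # the JDK Maven typically picks up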
I think it is because you are not remembering the refresh state.
At the top of the MyScreen function, add:
val state = rememberPullToRefreshState()
Now when you pull to refresh it should be aware of the state
First of all, you do not need 2 scripts for 2 different scenarios.
You've mentioned that you've used C++ and Java, but these errors were simple and easy to solve.
You used the myAge variable wrongly and did not use the same one in the 2 different if conditions.
I would suggest using CodePen and working through different JavaScript tutorials.
Thank you.
var yourName = prompt("What is your name?");
var myAge = prompt("What is your age");

if (yourName != null) {
  document.getElementById("sayHello").innerHTML = "Hello " + yourName;
} else {
  alert("Please enter your name correctly");
}

if (myAge < 4) {
  document.write("You should be in preschool");
} else if (myAge > 4 && myAge < 18) {
  document.write("You should be in public private school");
} else if (myAge > 18 && myAge < 24) {
  document.write("You should be in college");
} else {
  document.write("you're in the work force now");
}
body {
  font-size: 1.6em;
}

.hidden {
  display: none;
}

.show {
  display: inline !important;
}

button {
  border: 2px solid black;
  background: #E5E4E2;
  font-size: .5em;
  font-weight: bold;
  color: black;
  padding: .8em 2em;
  margin-top: .4em;
}
<p id="sayHello"></p>
Have you used the sinc interpolation method to solve differential equations? I need help
The way batch_size works is still hard to predict without digging through the source code, which I am trying to avoid at the moment. If I supply 63 configurations, each resampled three times, the result is a total of 189 iterations. The terminator is none, and I'm calling this job on 30 cores. If the batch_size parameter determines exactly how many configurations are evaluated in parallel, then setting it to a value of 50, e.g., should divide the jobs into four batches. When I call this, the returned info says that I actually have two batches, each evaluating 33/31 configurations and 96/93 resamplings. Any other batch_size also leads to an unpredictable split of iterations. How does this load balancing actually work?
tune(
  task = task,
  tuner = tnr("grid_search", batch_size = 50),
  learner = lrn("regr.ranger", importance = "permutation", num.threads = 8),
  resampling = rsmp("cv", folds = 3),
  measures = msr("regr.mae"),
  terminator = trm("none"),
  search_space = ps(
    num.trees = p_fct(seq(100, 500, 50)), # 9 levels
    mtry = p_fct(seq(3, 9, 1))            # 7 levels
  )
)
To handle Shopify subscriptions properly, you will need to store the Shopify subscription data in your database, including started_at, status, etc.
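A hypothetical minimal table sketch (column names are illustrative, not Shopify's):
CREATE TABLE shopify_subscriptions (
    id                   BIGSERIAL PRIMARY KEY,
    shopify_contract_id  TEXT NOT NULL,   -- Shopify subscription contract ID
    customer_id          TEXT NOT NULL,
    status               TEXT NOT NULL,   -- e.g. active, paused, cancelled
    started_at           TIMESTAMPTZ NOT NULL,
    updated_at           TIMESTAMPTZ NOT NULL
);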
The PDF function (experimental) in PowerApps can be used to generate a PDF. However, it does not support maps, embedded Power BI visuals, or nested galleries. I guess that could be incorporated in the PPT?
This worked for me:
sudo apt install libpcre3-dev
Based on the information provided here I was unable to reproduce the issue using this data and the code below. Please provide a MWE which reproduces the issue. For future reference, SimpleITK/ITK have a dedicated discourse forum.
import dicom2nifti
import SimpleITK as sitk
import os
import time
dicom_folder_path = "./single_series_CIRS057A_MR_CT_DICOM"
nifti_output_path = "./result.nii.gz"
dicom_output_dir = "./result"
dicom2nifti.dicom_series_to_nifti(dicom_folder_path, nifti_output_path, reorient_nifti=False)
image = sitk.ReadImage(nifti_output_path, outputPixelType=sitk.sitkFloat32)
# List of tag-value pairs shared by all slices
modification_time = time.strftime("%H%M%S")
modification_date = time.strftime("%Y%m%d")
direction = image.GetDirection()
series_tag_values = [
("0008|0031", modification_time), # Series Time
("0008|0021", modification_date), # Series Date
("0008|0008", "DERIVED\\SECONDARY"), # Image Type
(
"0020|000e",
"1.2.826.0.1.3680043.2.1125." + modification_date + ".1" + modification_time,
), # Series Instance UID
(
"0020|0037",
"\\".join(
map(
str,
(
direction[0],
direction[3],
direction[6],
direction[1],
direction[4],
direction[7],
),
)
),
), # Image Orientation
# (Patient)
("0008|103e", "Created-SimpleITK"), # Series Description
]
# Write floating point values, so we need to use the rescale
# slope, "0028|1053", to select the number of digits we want to keep. We
# also need to specify additional pixel storage and representation
# information.
rescale_slope = 0.001 # keep three digits after the decimal point
series_tag_values = series_tag_values + [
("0028|1053", str(rescale_slope)), # rescale slope
("0028|1052", "0"), # rescale intercept
("0028|0100", "16"), # bits allocated
("0028|0101", "16"), # bits stored
("0028|0102", "15"), # high bit
("0028|0103", "1"),
] # pixel representation
writer = sitk.ImageFileWriter()
writer.KeepOriginalImageUIDOn()
for i in range(image.GetDepth()):
    slice = image[:, :, i]
    for tag, value in series_tag_values:
        slice.SetMetaData(tag, value)
    # slice origin and instance number are unique per slice
    slice.SetMetaData(
        "0020|0032",
        "\\".join(map(str, image.TransformIndexToPhysicalPoint((0, 0, i)))),
    )
    slice.SetMetaData("0020|0013", str(i))
    writer.SetFileName(os.path.join(dicom_output_dir, f"{i+1:08X}.dcm"))
    writer.Execute(slice)
Had the same issue. Worked around it by using the Angular app template (without ASP core) and creating a second project with the ASP core API template.
Apparently only the Angular app template is updated.
Increase the timeout to a higher value
If this helps anyone: I updated my Prisma version to the latest and it worked fine.
Instead of
https://graph.microsoft.com/v1.0/sites/{siteId}/drive/root:/directoryName1/directoryName2:/children?search(q='Data')
you should remove the /children segment and use the /search endpoint, like this:
https://graph.microsoft.com/v1.0/sites/{siteId}/drive/root:/directoryName1/directoryName2:/search(q='Data')
It will return all files in the specified directory and then apply the search filter.
Thank you for your answers.
I followed the approach in my [edit 1] proposal and came up with the following.
# FROM
packages_to_dl = [
    { "part": "file_1.7z.001" },
    { "part": "file_1.7z.xxx" },
    { "part": "file_N.7z.001" },
    { "part": "file_N.7z.xxx" },
]
# TO
packages_to_dl = [
    [ "file_1.7z.001", "file_1.7z.xxx" ],
    [ "file_N.7z.001", "file_N.7z.xxx" ],
]

async def download(self, packages_to_dl: list) -> None:
    for idx, packages in enumerate(packages_to_dl):
        if idx == 0:
            """ Download batch of parts """
            async with asyncio.TaskGroup() as tg:
                [
                    tg.create_task(
                        coro=self.download_from_gitlab(
                            url,              # url and output_document are derived
                            output_document,  # from x in the real code (elided here)
                        )
                    )
                    for x in packages
                ]
        if idx != 0:
            async with asyncio.TaskGroup() as tg:
                """ Download idx parts... """
                [
                    tg.create_task(
                        coro=self.download_from_gitlab(
                            url,
                            output_document,
                        )
                    )
                    for x in packages
                ]
                """ ... while extracting idx-1 parts """
                args = [
                    'x',
                    packages_to_dl[idx - 1][0],
                    save_dir,
                ]
                tg.create_task(
                    coro=self.extract(
                        "7z",
                        args,
                    )
                )
    """ Once the loop is done, extract the last batch of parts """
    args = [
        'x',
        packages_to_dl[-1][0],
        save_dir,
    ]
    await self.extract("7z", args)

async def download_from_gitlab(self, url: str, output_document: str, limiter=2) -> None:
    # Note: a semaphore created per call limits nothing across tasks; share a
    # single instance between tasks to actually cap parallel downloads.
    async with asyncio.Semaphore(limiter):  # download parts 2 by 2 by default
        async with self._session.get(url=url) as r:
            with open(output_document, "wb") as f:
                chunk_size = 64 * 1024
                async for data in r.content.iter_chunked(chunk_size):
                    f.write(data)

async def extract(self, program: str, args: list[str]) -> None:
    proc = await asyncio.create_subprocess_exec(
        program,
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    await proc.communicate()
    print(f'{program} {" ".join(args)} exited with {proc.returncode}')
Cheers,
Title: Why is Sinc-Interpolation with Double Exponential Transform not showing exponential convergence?
Body: Hello everyone, I’m working on numerically solving the boundary value problem:
u''(x) - u(x) = \sin(\pi x), \quad x \in [-1, 1], \quad u(-1) = u(1) = 0
I'm applying Sinc-Interpolation with the Double Exponential (DE) transformation as described in Stenger's method. I construct the second derivative matrix D^{(2)} in the t-domain, then transform it to the x-domain using:

D^{(2)}_x = \mathrm{diag}\left( \frac{1}{\phi'(t_k)^2} \right) \cdot D^{(2)}_t - \mathrm{diag}\left( \frac{\phi''(t_k)}{\phi'(t_k)^3} \right) \cdot D^{(1)}_t
I solve the linear system
(D^{(2)}_x - I)\, u = f
after applying Dirichlet boundary conditions at x = \pm 1. The exact solution is known and smooth:

u(x) = -\frac{1}{\pi^2 + 1} \sin(\pi x)
However, even after increasing N up to 50 or more, I’m not seeing exponential decay in the maximum error. The error seems to flatten out or decrease very slowly. I suspect a subtle mistake is hiding in my implementation — either in the transform, the derivative matrices, or the collocation formulation.
Any ideas on what I might be missing? Has anyone implemented Sinc collocation with DE and observed similar issues?
Thank you in advance!
I use Config.Image, a field of the container's inspect output, plus jq to parse the JSON output:
docker inspect <container-id/name> | jq -r '.[0].Config.Image'
According to the documentation, your code is lacking the .listStyle(.insetGrouped) modifier on the list, as follows:
List {
    // (...)
}
.listStyle(.insetGrouped)
Solution: just enable the "Delegate IDE build/run actions to Maven" option in Maven -> Runner.
This solved my problem after long hours of different tries.
A colleague worked on this issue and he used the percentage values instead of the raw values in the data given to the chart. This fixed the issue!
Unfortunately, I do not work on the project anymore so I cannot try what kikon and oelimoe suggest in comments.
Below is the distilled “field-notes” version, with the minimum set of changes that finally made web pages load on both Wi-Fi and LTE while still blocking everything that isn’t on the whitelist.
| Option | What it does | Effort | Battery |
|---|---|---|---|
| DNS-only allow-list (recommended) | Let Android route traffic as usual, but fail every DNS lookup whose FQDN is not on your list. | Minimal | Minimal |
| Full user-space forwarder | Suck all packets into the TUN, recreate a TCP/UDP stack in Kotlin, forward bytes in both directions. | Maximum | Maximum |
Unless you need DPI or per-packet accounting, stick to DNS filtering first. You can always tighten the net later.
class SecureThread(private val vpn: VpnService) : Runnable {

    private val dnsAllow = hashSetOf(
        "sentry.io", "mapbox.com", "posthog.com", "time.android.com",
        "fonts.google.com", "wikipedia.org"
    )

    private lateinit var tunFd: ParcelFileDescriptor
    private lateinit var inStream: FileInputStream
    private lateinit var outStream: FileOutputStream
    private val buf = ByteArray(32 * 1024)

    // Always use a public resolver – carrier DNS often hides behind 10.x / 192.168.x
    private val resolver = InetSocketAddress("1.1.1.1", 53)

    override fun run() {
        tunFd = buildTun()
        inStream = FileInputStream(tunFd.fileDescriptor)
        outStream = FileOutputStream(tunFd.fileDescriptor)

        val dnsSocket = DatagramSocket().apply { vpn.protect(this) }
        dnsSocket.soTimeout = 5_000 // don’t hang forever on bad networks

        while (!Thread.currentThread().isInterrupted) {
            val len = inStream.read(buf)
            if (len <= 0) continue

            val pkt = IpV4Packet.newPacket(buf, 0, len)
            val udp = pkt.payload as? UdpPacket
            if (udp == null || udp.header.dstPort.valueAsInt() != 53) {
                passthrough(pkt)
                continue
            }

            val dns = Message(udp.payload.rawData)
            val qName = dns.question.name.toString(true)

            if (dnsAllow.none { qName.endsWith(it) }) {
                // Synthesize NXDOMAIN
                dns.header.rcode = Rcode.NXDOMAIN
                reply(pkt, dns.toWire())
                continue
            }

            // Forward to 1.1.1.1
            val fwd = DatagramPacket(udp.payload.rawData, udp.payload.rawData.size, resolver)
            dnsSocket.send(fwd)

            val respBuf = ByteArray(1500)
            val respPkt = DatagramPacket(respBuf, respBuf.size)
            dnsSocket.receive(respPkt)

            reply(pkt, respBuf.copyOf(respPkt.length))
        }
    }

    /* - helpers - */

    private fun buildTun(): ParcelFileDescriptor =
        vpn.Builder()
            .setSession("Whitelist-DNS")
            .setMtu(1280)               // safe for cellular
            .addAddress("10.0.0.2", 24) // dummy, but required
            .addDnsServer("1.1.1.1")    // force all lookups through us
            .establish()

    private fun passthrough(ip: IpV4Packet) = outStream.write(ip.rawData)

    private fun reply(request: IpV4Packet, payload: ByteArray) {
        val udp = request.payload as UdpPacket
        val answer =
            UdpPacket.Builder(udp)
                .srcPort(udp.header.dstPort)
                .dstPort(udp.header.srcPort)
                .srcAddr(request.header.dstAddr)
                .dstAddr(request.header.srcAddr)
                .payloadBuilder(UnknownPacket.Builder().rawData(payload))
                .correctChecksumAtBuild(true)
                .correctLengthAtBuild(true)
        val ip =
            IpV4Packet.Builder(request)
                .srcAddr(request.header.dstAddr)
                .dstAddr(request.header.srcAddr)
                .payloadBuilder(answer)
                .correctChecksumAtBuild(true)
                .correctLengthAtBuild(true)
                .build()
        outStream.write(ip.rawData)
    }
}
No catch-all route ⇒ no packet loop. We don’t call addRoute("0.0.0.0", 0), so only DNS lands in the TUN.
Public resolver (1.1.1.1) is routable on every network. Carrier-private resolvers live behind NAT you can’t reach from the TUN.
NXDOMAIN instead of empty A-record. Browsers treat rcode=3 as “host doesn’t exist” and give up immediately instead of retrying IPv6 or DoH.
MTU 1280 keeps us under the typical 1350-byte cellular path-MTU (bye-bye mysterious hangs).
Keep a ConcurrentHashMap<InetAddress, Long> of “known good” addresses (expires at TTL).
After you forward an allowed DNS answer, add every A/AAAA to the map.
Add addRoute("0.0.0.0", 0) / addRoute("::", 0) and implement a proper forwarder:
UDP: create a DatagramChannel, copy both directions.
TCP: socket-pair with SocketChannel + Selector.
Drop any packet whose dstAddr !in allowedIps.
That’s basically what tun2socks, Intra, and Nebula do internally. If you don’t want to maintain your own NAT table, embed something like go-tun2socks with JNI.
When you do block IPv6 queries, respond with an AAAA that points to loopback:
dnsMsg.addRecord(
    AAAARecord(
        dnsMsg.question.name,
        dnsMsg.question.dClass,
        10,
        Inet6Address.getByName("::1")
    ),
    Section.ANSWER
)
Chrome will happily move on to the next host in the alt-svc list.
Thread-local DatagramSocket – avoids lock contention in your executor:
private val dnsSock = ThreadLocal.withInitial {
    DatagramSocket().apply { vpn.protect(this) }
}
Timeouts everywhere – missing one receive() call on cellular was what froze your first run.
Verbose logging for a day, then drop to WARN – battery thanks you.
Happy hacking!
Set the appropriate parameter in your message request:
collapseKey on Android
apns-collapse-id on Apple
Topic on Web
collapse_key in legacy protocols (all platforms)
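As a sketch, here is how those land in a single FCM HTTP v1 request body (values are illustrative):
{
  "message": {
    "token": "DEVICE_TOKEN",
    "notification": { "title": "Score", "body": "1-0" },
    "android": { "collapse_key": "score_update" },
    "apns": { "headers": { "apns-collapse-id": "score_update" } },
    "webpush": { "headers": { "Topic": "score_update" } }
  }
}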
Is the reCAPTCHA issue resolved after downloading it from the Google Play release?
Ensure the .m3u8 URL is valid and uses HTTP(S), and that the AVURLAsset is loaded with AVAsset.loadValuesAsynchronously before accessing its tracks.
@JvdV, your solution works, but it also returns duplicate values. Would you please modify the formula to remove duplicate values?
Little "side effect hack": menu "Debug", "Attach to process" select process which not running under your credentials will cause restart VS in "admin mode".
You can do this via Python and its win32com.client. This loop will print out all text from all slides of Test.pptx.
import win32com.client

ppt_dir = r'C:\Users\Documents\Test.pptx'
ppt_app = win32com.client.GetObject(ppt_dir)

for ppt_slide in ppt_app.Slides:
    for shape in ppt_slide.Shapes:
        # Only shapes with a text frame carry text; guard to avoid COM errors.
        if shape.HasTextFrame and shape.TextFrame.HasText:
            print(shape.TextFrame.TextRange)
Result: This is a test
Take this as a starting point, depending on what you want to do with the extracted text.
Probably this is because trace information is not propagated. Check the headers provided by the producer side. If there are no trace headers, check the server side; if headers are supplied, then it is a consumer-side problem.
Trace info propagation depends on the interop mechanism: for REST it is one set of classes, for Kafka another.
For Kafka, e.g., Spring Boot 3 has observation turned off by default. It can be turned on with these properties:
spring.kafka.template.observation-enabled=true for KafkaTemplate
spring.kafka.listener.observation-enabled=true for the Listener
Or if you construct the KafkaTemplate and/or ConcurrentKafkaListenerContainerFactory beans yourself, then you should set observation while configuring the beans. Have a look at this article: https://www.baeldung.com/spring-kafka-micrometer#2-propagating-the-context
For REST, afaik, there are no special properties and it should work out of the box.
BTW, you don't need to include the micrometer-tracing dependency; it is transitive from micrometer-tracing-bridge-brave.
For those coming here looking for how to get the old-style stack view: in the latest versions I think they've changed the stack view to something else, which I personally wouldn't prefer. The workaround:
Open r2 with a binary, then save the current default layout under any name, say "xyz" (or use one of your saved layouts); it should be in your saved layouts. Now go to .local/share/radare2/r2panels; you can see the "xyz" config file there. Make this modification:
Change the stack Cmd from "xc 256@r:SP" to "pxr@r:SP", which will get you that good old radare2 stack view.
I understand this is an old post, but I've come here looking for an answer.
I've been told it can be resolved with a Triplanar Shader. I am currently downloading a package which hopefully will show some results.
Is it possible to rotate the annotation? I have searched through documentation, gallery and answers here and I wasn't able to find any hint.
In my case rm -rf node_modules and yarn install were enough.
The issue of the TextField being hidden behind the keyboard can be resolved by using .safeAreaInset(edge: .bottom) to place the input bar above the keyboard. Here’s a complete example that demonstrates how to achieve this in SwiftUI:
struct ChatView: View {
    @State private var typedText: String = ""

    var body: some View {
        ScrollViewReader { scrollProxy in
            ScrollView {
                VStack(spacing: 8) {
                    ForEach(0..<20, id: \.self) { index in
                        Text("Message \(index)")
                            .frame(maxWidth: .infinity, alignment: .leading)
                    }
                }
                .padding()
            }
            .safeAreaInset(edge: .bottom) {
                inputBar
            }
        }
    }

    var inputBar: some View {
        VStack(spacing: 0) {
            Divider()
            HStack {
                TextField("Start typing here...", text: $typedText)
                    .textFieldStyle(RoundedBorderTextFieldStyle())
            }
            .padding()
            .background(Color(UIColor.systemBackground))
        }
    }
}
Using .safeAreaInset(edge: .bottom) ensures the TextField is always displayed above the keyboard, respecting the safe area. This approach works reliably in iOS 15 and later.
An interesting fact about using TanStack Query within Next.js's App Router is that you need to prefetch all queries that you use in client components.
If the query depends on some variable that is not connected with the query directly (e.g. via its key), use an imperative fetch (docs) via the queryClient hook.
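For reference, a minimal prefetch sketch in a server component, assuming TanStack Query v5's hydration API (Todos and getTodos are placeholders):
import { QueryClient, dehydrate, HydrationBoundary } from '@tanstack/react-query'
import { Todos, getTodos } from './todos' // hypothetical client component + fetcher

export default async function Page() {
  const queryClient = new QueryClient()
  // Prefetch on the server so the client component's useQuery starts hydrated.
  await queryClient.prefetchQuery({ queryKey: ['todos'], queryFn: getTodos })
  return (
    <HydrationBoundary state={dehydrate(queryClient)}>
      <Todos />
    </HydrationBoundary>
  )
}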
===
This was helpful resource too:
https://www.robinwieruch.de/next-server-actions-fetch-data/
Forget about DBeaver CE if you need it to work with local client utilities; that support is fuzzy at best.
It should also be borne in mind that there are different compilers for the two controllers: XC16 is used for the PIC24, while Microchip recommends the XC-DSC compiler for new projects with the dsPIC.
Use a proper Paper Size with the Same Aspect Ratio in CSS
To solve this problem, use a paper size with the same aspect ratio in your CSS, as shown below. This example is for A4.
@page {
  /* A4 size (210mm × 297mm) scaled by 1.5x */
  size: 315mm 445.5mm;
  margin: 0;
}
Add any resolver, for example:
@Resolver()
export class AppResolver {
  @Query(() => String)
  hello(): string {
    return 'Hello world!';
  }
}
and add it to providers in your app.module:
providers: [AppResolver, AppService],
I diagnosed the forked package of typescript-codegen and found that there was no code to handle ArrayBuffer responses, which was causing the issue.
Turns out this simply needs vi.runAllTimersAsync() instead. Then it works.
Of course, stubbing setTimeout() is also an option.
For additional information, as said by a Logstash maintainer (source):
If I use Filebeat for collecting a particular kind of log file on all servers, I'd use Filebeat everywhere instead of making an exception for the Logstash server(s), which theoretically wouldn't have needed Filebeat. The file input and Filebeat have slightly different tuning options, too.
Please note that until you commit the changes, the nextval function call in the default value of your primary key column is missing.
Here’s how it should be structured:
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:System="clr-namespace:System;assembly=mscorlib">
    <!-- Cool comment -->
    <Grid>
        <!-- Your UI elements go here -->
    </Grid>
</Window>
the leak that just won't quit until today.
Make sure you use sqlalchemy >= 2.0.0.
I was using SQLAlchemy 1.4.28 with the latest pandas, which are no longer compatible (and I could not upgrade because my Airflow version was limited to sqlalchemy < 2.0.0).
See this discussion: https://github.com/pandas-dev/pandas/issues/58949#issuecomment-2153485545
This package is now more than 2 years old. It won't work with updated Flutter versions. Use image_gallery_saver_plus instead. Don't worry, this package is based on the original image_gallery_saver.
For details, click here.
Alright, I figured out the answer to the two questions about postfixes: just use --output-hashing=none when building, and the postfixes won't appear.