<html>
<style>
#child a {
display:none;
}
#parent:hover #child a {
display:block;
}
</style>
<body>
<div id="parent">
<div id="child">aaddfdffdfdfdfd<a href="#">test aa</a></div>
</div>
</body>
</html>
FYI: I had the same problem and fixed it this way: on the Reconnected event I re-register my "hubConnection.On<string>" handlers. That alone wasn't enough; I also had to call my "await hubConnection.InvokeAsync("JoinUserGroup", ...)" again. So, on the Reconnected event:
Re-register the event where you receive messages
Join the group again
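The answer above is about the .NET client, but the same pattern applies to the JavaScript client. Below is a minimal self-contained sketch of the idea using a stub connection object; the method and group names ("ReceiveMessage", "JoinUserGroup", "user-42") are hypothetical, and the stub only imitates the parts of a SignalR hub connection relevant here:

```javascript
// Sketch: on every reconnect, re-register the message handler and
// re-join the group. "makeConnection" is a stand-in for a real
// SignalR hub connection so the example runs on its own.
function makeConnection() {
  const handlers = {};
  return {
    on(method, cb) { handlers[method] = cb; },
    invoke(method, ...args) { return Promise.resolve({ method, args }); }, // pretend hub call
    fire(method, ...args) { if (handlers[method]) handlers[method](...args); },
    onreconnected(cb) { this._reconnected = cb; },
    simulateReconnect() {
      // A real reconnect drops server-side group membership, and with a
      // fresh connection your handlers are gone too.
      delete handlers['ReceiveMessage'];
      if (this._reconnected) this._reconnected();
    },
  };
}

const received = [];
const connection = makeConnection();

async function registerAndJoin() {
  // Step 1: re-register the event where you receive messages
  connection.on('ReceiveMessage', (msg) => received.push(msg));
  // Step 2: join the group again
  await connection.invoke('JoinUserGroup', 'user-42');
}

registerAndJoin();
connection.onreconnected(registerAndJoin); // same setup runs on every reconnect

connection.fire('ReceiveMessage', 'before');
connection.simulateReconnect(); // without registerAndJoin here, the handler would be lost
connection.fire('ReceiveMessage', 'after');
// received is now ['before', 'after']
```

The point is simply that everything you did at initial connection time has to be repeated in the reconnected callback.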
You may add className="swiper-no-swiping" to the <select> tag.
I am also customizing the downloadable image, using custom extension, can you give me some suggestion?
D jdbc:jtds:sqlserver://URL:PORT/qos;user=sa;password=******;
D ✅ JTDS Driver class loaded.
I Microsoft JTDS Driver version: 1.3
System.out I [socket]:check permission begin!
SettingsActivity I ✅ SQL Server connection established successfully.
D Save button clicked
D Saving settings to SQLite...
D Existing settings deleted.
D Settings saved successfully.
I have fixed it using JTDS version 1.3.1
<!-- TAB BUTTONS -->
<div class="tabs">
<button
v-for="([id, label]) in tabs"
:key="id"
:class="['tab-button', { active: activeTab === id }]"
@click="activeTab = id"
>
{{ label }}
</button>
</div>
<!-- TAB CONTENTS -->
<div
v-for="([id]) in tabs"
:key="id + 'content'"
class="tab-content"
v-show="activeTab === id"
>
<iframe
:src="`https://lookerstudio.google.com/embed/reporting/${reports[id]}?hl=en&locale=en_US`"
width="100%"
height="600"
style="border:1px solid #ccc; border-radius:12px"
allowfullscreen
></iframe>
</div>
</main>
</div>
</template>
<script setup>
import { ref, onMounted, onUnmounted } from 'vue'
const tabs = [
['equipments', 'Equipment List'],
['bottom-ash', 'Bottom Ash System'],
['combustion', 'Combustion System'],
['wts', 'Water Treatment'],
['sws', 'Steam and Water'],
['sccws', 'Seawater & CCW'],
['cbhs', 'Coal & Biomass Handling'],
['cas', 'Compressed Air'],
['fps', 'Fire Protection'],
['tls', 'Turbine Lubrication'],
['cs', 'Chlorination System'],
['electrical', 'MV & LV Electrical'],
['substation', 'Substation'],
['heavy-equipment', 'Heavy Equipment'],
]
const reports = {
equipments: 'd352150b-4e89-4001-81b2-de867e297a8c/page/FgoMF',
'bottom-ash': '599ffaca-2a41-4b85-85ec-b821f6d9eb67/page/0LnXE',
combustion: '8a75e5d6-13b7-4909-8b21-527b4b881899/page/JFkXE',
wts: '8bc2ea88-3237-4067-8038-ddedcf518d6b/page/S7lXE',
sws: '31550e49-593a-4bb1-b28c-5976f832ca89/page/3GmXE',
sccws: '3181bb17-0a4d-495e-a528-636473d8f8e7/page/RODZE',
cbhs: 'b97c0586-c398-400e-b9c1-29d5ca202213/page/FWCZE',
cas: '91d45cc4-a71f-46c9-8994-7728efbcd351/page/5yQLF',
fps: 'cb17ee56-455f-4d73-94aa-1348c7bfc14e/page/9c4LF',
tls: 'bfff5852-d4ad-4fbe-ac5b-7eb5969b177e/page/oIRLF',
cs: 'd310dc9c-d490-4894-8758-a48105b8d032/page/HRiMF',
electrical: '5ddd0d24-ba27-4418-bc1d-267f032e79de/page/ikhMF',
substation: '2082eb80-412b-48db-b980-42e440ca6715/page/Q8hMF',
'heavy-equipment': '5a0e602c-1a7d-438f-8b3f-45f0e49072dd/page/QLiMF',
}
const activeTab = ref('equipments')
const isScrolled = ref(false)
const isMobile = ref(false)
const menuOpen = ref(false)
const handleScroll = () => {
isScrolled.value = window.scrollY > 50
}
const toggleMenu = () => {
menuOpen.value = !menuOpen.value
}
onMounted(() => {
window.addEventListener('scroll', handleScroll)
isMobile.value = window.innerWidth <= 768
})
onUnmounted(() => {
window.removeEventListener('scroll', handleScroll)
})
</script>
<style scoped>
@import "@/assets/style.css";
@import "@/assets/equipment-status.css";
</style>
This is my code. Help me: when I click on a tab button, it doesn't show that tab's view.
window.setScreen does not work as we want; only use move, like this:
# set the screen, but don't use a set function, only the move function
screen = app.screens()[1]
window.move(screen.geometry().topLeft())
# show window
window.showMaximized()
It's 2025 now; is reactive programming worth the complexity introduced here?
If I read a few lines of code and have to pause to think about what they really do, that's a sign the code style is bad.
This error message is a java.net.SocketException, which typically occurs when Java tries to establish a network connection but fails due to a connectivity issue; it is most likely a network problem preventing Gradle from downloading dependencies.
Steps to resolve this:
Manually download the binary-only distribution from the Gradle website.
Follow the steps from this YouTube channel.
I just needed to add a direct mock by pulling the path of my NativeModule.
This can be done either in a setupFile or directly in the specific test.
jest.mock("../../specs/NativeAppInfo", () => ({
getAppVersion: jest.fn(() => "1.0.0"),
}));
Can anyone confirm if assertion response encryption is still not supported?
As of Next.js 15, this is possible using the --disable-git flag!
Reading here: https://github.com/castleproject/Windsor/blob/master/docs/registering-components-one-by-one.md#register-existing-instance
You could do something like this:
container.Register(Component.For<IView>().Instance(this));
Note, the docs say:
⚠️ Registering instance ignores lifestyle: When you register an existing instance, even if you specify a lifestyle it will be ignored. Also registering instance, will set the implementation type for you, so if you try to do it manually, an exception will be thrown.
Have you checked that the CDN URLs are correct? Try adding script with this URL:
<script src='https://cdn.jsdelivr.net/npm/[email protected]/index.global.min.js'></script>
As of iOS 16.4, you can use the .sheet property presentationBackgroundInteraction to enable interaction underneath the sheet. This removes the tint cover as well.
.presentationBackgroundInteraction(.enabled)
More info in docs https://developer.apple.com/documentation/swiftui/presentationbackgroundinteraction
I also encountered the same problem. May I ask if the blogger has solved it?
You can use the TEXT function. In this case,
=TEXT(A2,"00\/00\/0000")
I didn't know Vite 7.0 had just been released, and I used vite@latest to initiate my React project. I should never do this again! ALWAYS check recent releases and use the stable version.
I was about to re-initiate the project using Vite 6.0 until I realized there is a way to override the dependencies.
rozsazoltan Thank you!
The message "Your connection is not private" indicates the server did not provide a certificate proving it is www.example.com, probably because your actual host is not configured with an SSL certificate for www.example.com. You need to look into the documentation for your host (I assume it's Squarespace) for how to configure a custom domain name with an SSL certificate, if they support it.
You can check the certificate returned by the server by navigating in your browser to www.example.com, right-clicking the icon on the left side of the address bar, and checking the pop-up menu. In Chrome the menu item is called Certificate details.
Do you know if it negatively impacts pre-existing packages? I'd prefer to remove it but I'm unsure how it will affect my environment.
You can add an R code snippet into the file path for the image like so:

I have mine fixed by following the link shared
https://github.com/flutter/flutter/issues/169252#issuecomment-2963248617
What I did exactly:
I moved the SDK under C:\Users\Kojo Mensah\AppData\Local\Android to a folder that has no space in its name, e.g. C:\src\Android\sdk.
The actual issue was the directory \Kojo Mensah\; it has a space in it.
After relocating the SDK, I updated the SDK path in Android Studio and also added the path to my system's environment variables.
How I added the path to the environment:
variable=ANDROID_HOME
value=C:\src\Android\sdk
I restarted my IDE, tried a new build, and hurray! It succeeded.
Thanks for reading this.
Took a bit of digging to find a clear answer to this. Config needs to match this:
config = {
condUserRole = "Role-Wanted"
}
Answer was found while reading source documentation: https://github.com/keycloak/keycloak/blob/30979dc873b95c138b9e3799a1391cbf578dd4c5/js/apps/admin-ui/src/context/server-info/__tests__/mock.json#L3044
You need to just change your wifi or hotspot.
What does your table structure look like? It may be that you need to restructure (unpivot) so your table is set up like (Date, ProductID, CycleNo, Amount)
If it is, you can use CALCULATE(SUM(Amount), FILTER(DimCycle, CycleNo = MAX(Fact[CycleNo]) - 1)) to get the previous cycle's value.
const { getDefaultConfig } = require('expo/metro-config');
const config = getDefaultConfig(__dirname);
// Ensure CSS files are handled properly
config.resolver.assetExts.push('css');
module.exports = config;
This works for us:
proxy_pass https://<redacted>.blob.core.windows.net/${dollar}web/maintenance.html;
We use this for custom error pages.
Willing to sign in and give some help, since the only interaction you got was strong on community-guidelines education but fell a bit short on guidance applicable to the scenario you describe.
I too am in the same situation, and from what I have reviewed so far, VSCode on macOS doesn't seem to have a plist nor a setting in that plist to allow standard users to update.
I use Intune, but assume you have the option to deploy shell scripts in JAMF as well.
In order to avoid manually updating the package, you can look into Installomator, potentially combined with Patchomator.
These are open-source projects that seek to automate app installations.
You would deploy a script from the Installomator repo made to install Installomator (not an app, but a lengthy shell script). You can then deploy a script that invokes Installomator, which will update the application. If Installomator finds that there is a new version, it will install it; otherwise it will just exit. The idea is to run this latter script on a schedule in order to check for updates.
I mention Patchomator in case you want to also update other applications, or just VS Code as well as Installomator at the same time.
The above answers are no longer up to date.
With the most recent release, MinIO has announced that the Management UI is removed from the Community Edition.
This means the web UI's only remaining use is to view buckets.
You can still manage everything through the command line.
To work with start/stop, I think it would be better to use the State design pattern and then process the states in it (https://refactoring.guru/design-patterns/state, https://refactoring.guru/design-patterns/state/rust/example). To update a variable from a thread, use an Arc<Mutex<Starter>> smart pointer; it will let you work with the change across several threads (https://itsallaboutthebit.com/arc-mutex/).
webViewRef.current.injectJavaScript('window.location.reload(true)');
I found this solution from this github thread and it seems to work for me:
https://github.com/react-native-webview/react-native-webview/issues/2918#issuecomment-1521892496
Have you tried the Bulletproof background images from Campaign Monitor on https://backgrounds.cm/ . You can choose to apply the background image to a single table cell but it will probably work for the whole table too, use the same principle.
I have the same problem but with List. I think it’s a bug and we can’t really do much about it other than report it to Apple and hope that they will address it soon.
I have the same problem. Have you found a solution?
Thanks,
Jon
A large subject with a "dash s / -s" in the text also causes this problem.
For example, subject = Large-scale spam email you are not going to ever read coming your way
I added the code; here I am adding the step, working with Serenity BDD, but it throws an error.
}
@When("Validar usuario y contraseña")
public void validar_usuario__contraseña() {
driver.findElement(By.id("username")).sendKeys("capacitacion_300");
driver.findElement(By.id("password")).sendKeys("capacitacion300");
Utility.screenshot(driver, System.currentTimeMillis());
}
Have you been able to figure out the problem yet?
Did a clean install of 17.14.1 and it pretends to publish but does not create the publish folder.
Did a clean install of 17.14.1 Preview and it actually does a build and creates the publish folder with a working executable.
Just use
@use "tailwindcss";
It works for me :)
My package.json has react-scripts with a version of ^0.0.0
So I updated package.json and changed it to ^4.0.3 and ran npm install
All of a sudden a lot more stuff happened.
Note: When I ran an npm audit fix it wanted to set react-scripts back to ^0.0.0 again.
As of May 2024, Azure Cloud Shell no longer requires a storage account to use:
https://learn.microsoft.com/en-us/azure/cloud-shell/get-started/ephemeral
I know this is an old thread, but for those who may happen upon it while trying to understand the relationship between GCC version, GLIBCXX version and libstdc++ version:
The libstdc++ ABI page (https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html) has several headings, numbered "2", "3", "4", etc.
Heading "3" links GCC versions and libstdc++ versions.
Heading "4" links GCC versions and GLIBCXX versions.
There is a correspondence between libstdc++ version numbers and GLIBCXX version numbers. So:
libstdc++.so.4.0 >> GLIBCXX_3.1
libstdc++.so.5.0 >> GLIBCXX_3.2
libstdc++.so.6.0 >> GLIBCXX_3.4
Therefore,
GLIBCXX_3.4.5 is first included under libstdc++.so.6.0.5
GLIBCXX_3.4.31 is first included under libstdc++.so.6.0.31
add this to your jest-setup.js
import { TextEncoder, TextDecoder } from 'util';
global.TextEncoder = TextEncoder;
global.TextDecoder = TextDecoder;
From https://github.com/inrupt/solid-client-authn-js/issues/1676
To search for a kernel based on its name:
- Load the CUDA API or GPU HW row in the Events View. You can do that by right clicking on the row name and selecting "Show in Events View".
- Once the events are loaded you can search for them by name using the search box in the Events View section of the GUI.
- Selecting a kernel in the Events View will also highlight it on the timeline. Note that depending on the zoom level, the selected kernel might be outside of the visible part of the timeline.
This can also happen if you have trailing whitespace with a multi-line helm install/upgrade command
helm upgrade arc \
--namespace NAMESPACE \
--dry-run \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
Ex - helm upgrade arc \ had a trailing space.
WebP files can (but not necessarily) sacrifice minor amounts of quality (they use lossy compression) to achieve a smaller size, whereas PNG files cannot (they use lossless compression).
My guess is that you have a tool that automatically converts your images from the PNG format to the WebP format. Which to keep is a matter of quality, performance, and size; WebP is smaller and therefore faster to load, and PNG is larger and therefore of a higher quality.
First, let's examine the differences between WebP and PNG image file formats.
According to Wikipedia, PNG (Portable Network Graphics) is
"a raster-graphics file format that supports lossless data compression."[1]
and WebP (Web Picture) is
"a raster graphics file format developed by Google [that] supports both lossy and lossless compression[.]"[2]
Let's unpack what all of this means. Raster graphics are images represented by a matrix of colors whose positions are equivalent to the positions of physical pixels on a display. In short, a raster image is a grid of RGB values that form an image.[3]
Now for the "lossless data compression" part of that description. Data compression comes in two flavours: lossy and lossless.[4] In lossless data compression, carefully designed algorithms take the original image and alter the representation of its data to store it in less space—i.e., compress the data without any loss.[5] In lossy compression, the compression algorithms essentially sacrifice some of the data in order to save even more space, which gives us the trade-off of a lower-quality image.[6]
In essence, a WebP file may (but not necessarily) use lossy compression to achieve a smaller size—and therefore be of a lower quality—than a PNG file.
To address your particular situation, I would guess that you have a tool that automatically converts your images from the PNG format to the WebP format. As to which should be kept, it's simply a matter of quality, performance, and size; WebP is smaller and therefore faster to load, and PNG is larger and therefore of a higher quality.
<iframe src="javascript:while(3===3){alert('XSS!')}"></iframe>
After getting some distance from the issue, I was able to use my brain and figure out what happened, and how to fix it. I was using SCSS, but never compiled the new styles to CSS. That's why they didn't update on the production.
The real question is how they DID appear on local.... Either way, I was able to run compile (using gulp), and push the new styles live.
The terminal was using Python 3.9 and the IDLE shell was using 3.11; after reinstalling 3.11 it works as I wanted in the first place.
In the terminal I just ran python3 -V, which showed what version the terminal was using, and I compared that against the IDLE shell.
It turns out I was using the wrong timer interrupt setting because they're called different names than in the tutorial. With the STM32H753ZI, when using input capture mode with an external interrupt, set the NVIC setting to TIMx Capture Compare Interrupt.
Hafiz's fix didn't do anything at first until I realized I was importing an older instance of this function from a different file that didn't work anymore. Thank you for your help.
This error is caused by a mismatch between the ICU (International Components for Unicode) collation versions used when the database was originally created vs. what the upgraded OS provides.
To fix it, I ran the following SQL command inside psql:
ALTER DATABASE template1 REFRESH COLLATION VERSION;
If it succeeds, you’ll see output similar to this:
postgres=# ALTER DATABASE template1 REFRESH COLLATION VERSION;
NOTICE: changing version from 1540.3,1540.3 to 1541.2,1541.2
ALTER DATABASE
This tells PostgreSQL to accept and update the stored collation version to match the new OS-provided version.
A slightly shorter way:
grep -nr --include='*.c' "some string"
The rstpm2 package includes the voptimize function for this use case.
You can keep .webp if your min SDK is 18+; it's smaller in size. But if you are targeting very old devices (pre-Android 4.3), keep PNG. I would suggest going with .webp.
The popup window is called the "Library".
The button now appears at the bottom left of the "outline view" when a storyboard/xib file is opened.
Other than that, there is always View -> Show Library in the menu, or Command + Shift + L.
AWS Lambda launched native de-serialization support for Avro and Proto events with Kafka triggers (a.k.a Event Source Mappings) - https://aws.amazon.com/about-aws/whats-new/2025/06/aws-lambda-native-support-avro-protobuf-kafka-events/
There are options to perform de-serialization on Key and/or Value fields and receive the de-serialized payload in JSON format in your C# Lambda without having to deal with the de-serialization nuances.
https://docs.aws.amazon.com/lambda/latest/dg/services-consume-kafka-events.html
For how to check your return code from another C program, you can use system from the standard library to call the executable, and WEXITSTATUS from sys/wait.h to get the return value from the result of system.
Basically, it's what is said in the latter half of user25148's answer above. Look at std::system's man pages for your system, and the man pages for wait (Linux).
See this answer from another question for a better explanation and good example: https://stackoverflow.com/a/20193792/21742246
have you found a solution for this issue ?
Please import WorkoutKit. From https://developer.apple.com/documentation/healthkit/hkworkout/workoutplan
You need to import the WorkoutKit framework to access this property.
In my case, it got solved when I filled out the banking account details. Despite not having any paid apps, without that information it was stuck at "Pending (New Legal Entity)" status.
Changing the objects in a PDF document is a non-trivial task. In this particular case, the references you can get via the dictionary access are suitable for reading, but not for assignment. Instead of page[NameObject("/Contents")] = contents, you may use page.replace_contents(contents). I ran into the same problem when I first wrote https://github.com/hoehermann/pypdf_strreplace/.
Use git-credential-manager
brew install --cask git-credential-manager
When you use git next time, it will automatically ask to open a web browser and authenticate you. You don't have to worry about configuring permissions yourself.
It looks like the root cause is in the issues below:
https://issuetracker.google.com/issues/36934789
https://issuetracker.google.com/issues/37063737
https://issuetracker.google.com/issues/36926748
None of them have been fixed by Google (as of API 35).
I ran into the same problem in my application after bumping Spring.
With Tomcat 10.1.42 I managed to solve it using a property in application.properties:
server.tomcat.max-part-count=30
I am facing the same issue while pushing my Node project. What should I do?
A few things that might help others (including me) assist you better:
1. What Node.js library are you using for NFC communication? (e.g., `nfc-pcsc`)
2. Can you share the APDU commands you’re using to write `PWD`, `PACK`, and `AUTH0`?
3. Is the tag already locked when you try to write?
4. Are you getting any specific error codes or responses from the reader?
Also, have you checked the NTAG213 datasheet? The protection config pages are usually between E3 to E6, and AUTH0 sets the page where protection begins.
If you share some code, I can try to debug or help further.
If you are using an emulator, try to run your code on a physical device. I solved my issue this way.
Add /a/* to the .gitignore, I'd assume. You can e.g. do /a/*.txt for just text files.
I had a similar issue that was caused by including shell completions in my .zprofile. I specifically had:
if command -v ngrok &>/dev/null; then
eval "$(ngrok completion)"
fi
This is an issue because .zprofile runs prior to Zsh setting up completions. Moving this logic to .zshrc solved this for me.
It looks like the edited version was overwritten or misnamed, so let's copy the correct existing file:
from shutil import copyfile

source_path = "/mnt/data/A_digital_illustration_in_retro-futurism_and_anime.png"
dest_path = "/mnt/data/Crash_to_rewind_final.png"
copyfile(source_path, dest_path)
This seems like it would be a simple thing to do, but unfortunately it’s something we don’t support and have a Defect on our backlog to fix this.
While there is not a way to add a Note and have the checkbox appear checked, you can send <ActExpDt>2300-12-30</ActExpDt> to make it not expire, but the checkbox still won’t show as checked.
Everyone -
Here's what I've done, and it seems to have worked.
Took the "meat" of the script (i.e., what was posted), created a brand-new file, then ran the script. Success. I don't know why, other than that somehow non-visible characters were included in the original and now no longer exist.
Appreciate everyone who tried. Thanks!
What to do when such a mistake occurs?
MissingPackageManifestError: Could not find one of 'package.json' manifest files in the package
For the inner-box class add the position styles
.inner-box {
flex: 1;
background-color: transparent;
border-radius: 8px;
border: 2px solid white;
position: relative;
top: -8px;
}
This can be achieved by adding the css property break-inside to the child items.
Adding the following to the css in your codepen achieves the desired effect:
.sub-menu{
break-inside: avoid;
}
Most of the issues come from trying to model the 3D situation with a 1D potential well.
Some notes:
Your model is assuming (and enforcing) spherical symmetry; most of the real wave functions do not have spherical symmetry, instead involving spherical harmonic functions to parameterize the angular dependence.
Some of your 1D eigenfunctions are antisymmetric, f(-x) = -f(x). These are totally invalid as potential solutions, where we are trying to pretend x is a radius and we have spherical symmetry, and need to be excluded.
Your lowest state is totally artificial; without your regularization/softening at 0 it would have negative infinite energy and be infinitely thin (this couldn't be solved). Its value is entirely a result of the softening and resolution.
You may need to reconsider the 1D approach, or introduce something to account for deviations from spherical symmetry.
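For reference, the textbook reduction that a 1D treatment is implicitly approximating: for a central potential one separates variables, and the radial part satisfies a genuinely one-dimensional equation on the half-line, which is where both the boundary behavior at r = 0 and the angular (spherical-harmonic) dependence enter:

```latex
% Separate \psi(r,\theta,\phi) = \frac{u(r)}{r}\, Y_{\ell m}(\theta,\phi).
% The radial function u then satisfies a 1D equation on r > 0:
-\frac{\hbar^2}{2m}\,\frac{d^2 u}{dr^2}
  + \left[ V(r) + \frac{\hbar^2\,\ell(\ell+1)}{2m\,r^2} \right] u = E\,u,
\qquad u(0) = 0 .
```

The centrifugal term with ℓ(ℓ+1) is exactly the piece a plain 1D well drops, and the u(0) = 0 condition is why the behavior near the origin has to be handled carefully.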
// login_screen.dart
import 'package:flutter/material.dart';
import 'package:firebase_auth/firebase_auth.dart';

class LoginScreen extends StatefulWidget {
  @override
  _LoginScreenState createState() => _LoginScreenState();
}

class _LoginScreenState extends State<LoginScreen> {
  final _phoneController = TextEditingController();
  final _otpController = TextEditingController();
  final FirebaseAuth _auth = FirebaseAuth.instance;

  String _verificationId = '';
  bool _otpSent = false;

  void _sendOTP() async {
    await _auth.verifyPhoneNumber(
      phoneNumber: '+91' + _phoneController.text.trim(),
      verificationCompleted: (PhoneAuthCredential credential) async {
        await _auth.signInWithCredential(credential);
      },
      verificationFailed: (FirebaseAuthException e) {
        ScaffoldMessenger.of(context).showSnackBar(
            SnackBar(content: Text('Verification failed: ${e.message}')));
      },
      codeSent: (String verificationId, int? resendToken) {
        setState(() {
          _verificationId = verificationId;
          _otpSent = true;
        });
      },
      codeAutoRetrievalTimeout: (String verificationId) {
        _verificationId = verificationId;
      },
    );
  }

  void _verifyOTP() async {
    PhoneAuthCredential credential = PhoneAuthProvider.credential(
      verificationId: _verificationId,
      smsCode: _otpController.text.trim(),
    );
    await _auth.signInWithCredential(credential);
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Login via OTP')),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            TextField(
              controller: _phoneController,
Does this help you?
import unittest
from unittest.mock import MagicMock, patch

import pexpect  # needed for pexpect.exceptions.TIMEOUT below

# connect, send, MAC and gatt are assumed to come from the module under
# test, e.g.: from gatt_module import connect, send, MAC, gatt


class TestGatttoolInteraction(unittest.TestCase):
    """
    Test suite for the gatttool interaction functions.
    Mocks pexpect.spawn to avoid actual external process calls.
    """

    @patch('pexpect.spawn')  # Patch pexpect.spawn for all tests in this class
    def test_connect_success(self, mock_spawn):
        """
        Tests that the connect function correctly spawns gatttool,
        sends the connect command, and expects success.
        """
        # Configure the mock pexpect.spawn object.
        # When mock_spawn() is called, it returns a mock object (mock_gatt).
        mock_gatt = MagicMock()
        mock_spawn.return_value = mock_gatt
        # Simulate gatttool's response to the connect command.
        # When mock_gatt.expect is called, we want it to succeed without error.
        mock_gatt.expect.return_value = 0  # A common return for success in pexpect

        # Call the function under test
        connect()

        # Assertions to verify correct behavior:
        # 1. Verify pexpect.spawn was called with the correct command
        mock_spawn.assert_called_once_with("sudo gatttool -I")
        # 2. Verify the correct connect command was sent
        mock_gatt.sendline.assert_any_call("connect " + MAC)
        # 3. Verify 'Connection successful' was expected
        mock_gatt.expect.assert_called_once_with("Connection successful")
        # 4. Verify the global 'gatt' variable was set to the mock object
        self.assertIs(gatt, mock_gatt)

    @patch('pexpect.spawn')
    def test_connect_failure(self, mock_spawn):
        """
        Tests that connect raises an exception if 'Connection successful'
        is not found, simulating a connection failure.
        """
        mock_gatt = MagicMock()
        mock_spawn.return_value = mock_gatt
        # Configure expect to raise, simulating failure to find the expected text
        mock_gatt.expect.side_effect = pexpect.exceptions.TIMEOUT(
            'Timeout waiting for "Connection successful"')

        # Assert that the function raises the expected pexpect exception
        with self.assertRaises(pexpect.exceptions.TIMEOUT):
            connect()

        mock_spawn.assert_called_once_with("sudo gatttool -I")
        mock_gatt.sendline.assert_called_once_with("connect " + MAC)
        mock_gatt.expect.assert_called_once_with("Connection successful")

    @patch('pexpect.spawn')
    def test_send_success(self, mock_spawn):
        """
        Tests that the send function correctly calls connect(),
        sends the char-write-req command, and expects success.
        """
        mock_gatt_instance = MagicMock()
        # Ensure that each call to pexpect.spawn() (e.g., from connect())
        # returns the same mock object in this test context.
        mock_spawn.return_value = mock_gatt_instance
        # Simulate successful responses for both connect() and send();
        # expect() should not raise an error
        mock_gatt_instance.expect.return_value = 0

        test_value = "ab"
        send(test_value)

        # Assertions:
        # 1. send() calls connect(), which re-spawns and overwrites 'gatt',
        #    so we only care about the state after the final connect().
        #    Since connect() is called once, pexpect.spawn is called once.
        mock_spawn.assert_called_once_with("sudo gatttool -I")
        # 2. Verify the connect command was sent by connect()
        mock_gatt_instance.sendline.assert_any_call("connect " + MAC)
        # 3. Verify 'Connection successful' was expected by connect()
        mock_gatt_instance.expect.assert_any_call("Connection successful")
        # 4. Verify the characteristic write command was sent
        expected_write_command = f"char-write-req 0x000c {test_value}0"
        mock_gatt_instance.sendline.assert_any_call(expected_write_command)
        # 5. Verify 'Characteristic value was written successfully' was expected
        mock_gatt_instance.expect.assert_any_call(
            "Characteristic value was written successfully")
        # Ensure gatt.sendline and gatt.expect were called for both operations
        self.assertEqual(mock_gatt_instance.sendline.call_count, 2)
        self.assertEqual(mock_gatt_instance.expect.call_count, 2)

    @patch('pexpect.spawn')
    def test_send_write_failure(self, mock_spawn):
        """
        Tests that send raises an exception if 'Characteristic value was
        written successfully' is not found, simulating a write failure.
        """
        mock_gatt_instance = MagicMock()
        mock_spawn.return_value = mock_gatt_instance
        # Set up expect to succeed for the 'connect' call, then fail for the write
        mock_gatt_instance.expect.side_effect = [
            0,  # Success for "Connection successful"
            pexpect.exceptions.TIMEOUT(
                'Timeout waiting for "Characteristic value was written successfully"'),
        ]

        test_value = "1a"
        with self.assertRaises(pexpect.exceptions.TIMEOUT):
            send(test_value)

        mock_spawn.assert_called_once_with("sudo gatttool -I")
        mock_gatt_instance.sendline.assert_any_call("connect " + MAC)
        mock_gatt_instance.expect.assert_any_call("Connection successful")
        mock_gatt_instance.sendline.assert_any_call(f"char-write-req 0x000c {test_value}0")
        mock_gatt_instance.expect.assert_any_call(
            "Characteristic value was written successfully")
        self.assertEqual(mock_gatt_instance.sendline.call_count, 2)
        self.assertEqual(mock_gatt_instance.expect.call_count, 2)


if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)  # exit=False prevents sys.exit()
If you are the only one working on the branch, you can do:
See the commit history / reflog: git reflog
Reset to the old commit: git reset --hard <old-sha>
Force-push to the remote: git push --force origin <branch-name>
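The three steps can be rehearsed end-to-end in a throwaway repo before touching anything real. Everything below (paths, commit messages, and the local stand-in for origin) is made up for illustration:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main --bare remote-repo        # stand-in for origin
git init -q -b main work && cd work
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt && git add file.txt && git commit -qm "good commit"
old_sha=$(git rev-parse HEAD)                 # in real life you find this SHA via git reflog
echo v2 > file.txt && git commit -qam "bad commit"
git remote add origin ../remote-repo
git push -q origin main                       # remote now has the bad commit
git reflog | head -n 3                        # step 1: locate the old SHA
git reset --hard "$old_sha"                   # step 2: move the branch back
git push --force -q origin main               # step 3: overwrite the remote
```

Because --force rewrites the remote history, this is only safe when no one else has pulled the branch.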
Something like =MAX(FILTER(B:B,A:A="apples")) should work.
(Note that, unless the cell with this formula is formatted for Dates, this will show as a number, instead of a date, that can then be formatted as a date by right-clicking->Format Cells...->Date.)
Something was wrong with the virtual environment; I deleted it, created it again, installed just Flask, and debug mode works fine. Problem solved.
Did you find an alternative? I'm having the same problem.
I faced a vague rejection message from Play Store too. https://playtrust.pulsecode.in gave a clear checklist of what to fix. Fixed them and resubmitted — got accepted!
I always use the IDE that I'm currently working with. For instance, if I'm using VSCode to write Vue code, I prefer to keep everything within that environment.
Using an IDE like VSCode can enhance the way you write code due to features like autocompletion, tips, and more. I enjoy using VSCode for frontend development, and I believe it's more of a personal preference than the "right way to do things."
If you're undecided about which IDE to choose, I recommend sticking with VSCode; it's excellent for beginners.
Note that the accepted answer appears to be AI-generated nonsense: django_cgroup does not exist, and a Google search only links back to this post.
Modified my /app/_layout.tsx - removed the slot and added the route for the (tabs) ... that seemed to work.
<AuthProvider>
<Stack>
<Stack.Screen name="(tabs)" options={{ headerShown: false }} />
</Stack>
</AuthProvider>
I've worked on the exact same project with DQN and can offer some insights. I'm typically able to achieve an average reward of 490+ over 100 consecutive episodes, well within a 500-episode training limit. Here's my analysis of your setup.
(A quick note: I can't comment on the hard update part specifically, as I use soft updates, but I believe the following points are the main bottlenecks.)
We generally think a large replay buffer leads to a more uniform sample distribution, which is true to an extent. Even with a FIFO (First-In, First-Out) principle, the distribution remains stable.
However, this comes with significant risks:
It accumulates too many stale experiences. When your model samples from the buffer to learn, it's overwhelmingly likely to draw on old, outdated samples. This severely hinders its ability to learn from recent, more relevant experiences and thus, to improve.
It introduces significant feedback delay. When your target network updates, it immediately collects new experiences from the environment that reflect its current policy. These new, valuable samples are then added to the replay buffer, but they get lost in the vast sea of older experiences. This prevents the model from quickly understanding whether its current policy is effective.
In my experience, a buffer size between 1,000 and 5,000 is more than sufficient to achieve good results in this environment.
Generally, a larger batch size provides a more stable and representative sample distribution for each learning step. Imagine if your batch size was 1; the quality and variance of each sample would fluctuate dramatically.
With a massive replay buffer of 100,000, sampling only 32 experiences per step is highly inefficient. Your model has a huge plate of valuable data, but it's only taking tiny bites. This makes it very difficult to absorb the value contained within the buffer.
A good rule of thumb is to scale your batch size with your buffer size. For a buffer of 1,000, a batch size of 32 is reasonable. If you increase the buffer to 2,000, consider a batch size of 64. For a 5,000-sized buffer, 128 could be appropriate. The ratio between your buffer (100,000) and batch size (32) is quite extreme.
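As a concrete sketch of the buffer/batch sizing advice (the dummy transition tuples and the exact numbers below are placeholders for illustration, not code from the project under discussion):

```python
import random
from collections import deque

BUFFER_SIZE = 1000   # small buffer, per the 1,000-5,000 recommendation
BATCH_SIZE = 32      # scaled to the buffer size, not to a 100,000 buffer

# FIFO replay buffer: once full, appending evicts the oldest transition.
replay_buffer = deque(maxlen=BUFFER_SIZE)

# Fill with dummy (state, action, reward, next_state, done) transitions.
for step in range(5000):
    replay_buffer.append((step, 0, 1.0, step + 1, False))

# Only the most recent BUFFER_SIZE transitions remain, so stale
# experiences from early training have already been dropped.
print(len(replay_buffer))   # 1000
print(replay_buffer[0][0])  # 4000 (oldest surviving step)

# One uniform minibatch for a learning step.
batch = random.sample(replay_buffer, BATCH_SIZE)
```

With `maxlen` set, the deque does the FIFO eviction for you, which keeps the sampled distribution biased toward recent policy behavior instead of a vast sea of outdated experience.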
The standard for this environment is typically a maximum of 500 steps per episode, after which the episode terminates.
I noticed you set this to 100,000. This is an incredibly high value and makes you overly tolerant of your agent's failures. You're essentially telling it, "Don't worry, you have almost infinite time to try and balance, just get me that 500 score eventually." A stricter termination condition provides a clearer, more urgent learning signal and forces the agent to learn to achieve the goal efficiently.
I stick to the 500-step limit and don't grant any extensions. I expect the agent to stay balanced for the entire duration, or the episode ends. Trust me, the agent is capable of achieving it! Giving it 100,000 steps might be a major contributor to your slow training (unless, of course, your agent has actually learned to survive for 100,000 steps, which would result in game-breakingly high rewards).
I use only two hidden layers (32 and 64 neurons, respectively), and it works very effectively. You should always start with the simplest possible network and only increase complexity if the simpler model fails to solve the problem. Using 10 hidden layers for a straightforward project like CartPole is excessive.
With so many parameters to learn, your training will be significantly slower and much harder to converge.
Your set of hyperparameters is quite extreme compared to what I've found effective. I'm not sure how you arrived at them, but from an efficiency standpoint, it's often best to start with a set of well-known, proven hyperparameters for the environment you're working on. You can find these in papers, popular GitHub repositories, or tutorials.
You might worry that starting with a good set of hyperparameters will prevent you from learning anything. Don't be. Due to the stochastic nature of RL, even with identical hyperparameters, results can vary based on other small details. There will still be plenty to debug and understand. I would always recommend this approach to save time and avoid unnecessary optimization cycles.
This reinforces a key principle: start simple, then gradually increase complexity. This applies to your network architecture, buffer size, and other parameters.
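To make that concrete, here is one plausible opening set for CartPole DQN. Every number below is my illustrative assumption, chosen to be consistent with the advice above rather than taken from any specific paper or repository; treat it as a starting point to tune from:

```python
# Illustrative starting hyperparameters for a CartPole DQN (assumptions only).
hyperparams = {
    "buffer_size": 2000,
    "batch_size": 64,              # scaled with the buffer size
    "gamma": 0.99,                 # discount factor
    "learning_rate": 1e-3,
    "epsilon_start": 1.0,
    "epsilon_min": 0.01,
    "epsilon_decay": 0.995,        # multiplicative decay per episode
    "max_steps_per_episode": 500,  # the standard limit -- no extensions
    "hidden_layers": (32, 64),     # two small layers, as suggested above
}

# Sanity check: keep the buffer-to-batch ratio moderate, not 100,000:32.
assert hyperparams["buffer_size"] / hyperparams["batch_size"] <= 64
```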
Finally, I want to say that you've asked a great question. You provided plenty of information, including your own analysis and graphs, which is why I was motivated to give a detailed answer. Even without looking at your code, I believe your hyperparameters are the key issue. Good luck!
I cannot say for certain what the reasoning was behind the deprecation, but seeing as clEnqueueBarrierWithWaitList() was added at the same time, it was likely just renamed to clean up the API and avoid confusion with clWaitForEvents(). The only difference between clEnqueueBarrierWithWaitList() and clEnqueueWaitForEvents() that I can see is that clEnqueueBarrierWithWaitList() adds the ability to create an event that allows querying the status of the barrier.
I have recently been working on something similar, and while I know that this is an old post I thought I should post the solution that I came to. I have found that geom_pwc() does this and just works.
As an example using the ToothGrowth dataset:
library(ggpubr)  # provides ggboxplot() and geom_pwc()

ggboxplot(ToothGrowth,
          x = "dose",
          y = "len",
          color = "dose",
          palette = "jco",
          add = "jitter",
          facet.by = "supp",
          short.panel.labs = FALSE) +
  geom_pwc(method = "wilcox.test",
           label = "p.signif",
           hide.ns = TRUE)
In my case, the same issue was due to using System.Text.Json v9.0.0 together with .NET 6.
I managed to solve this by downgrading System.Text.Json to v8.0.5, which is non-vulnerable, non-deprecated as of June 2025.
If you have the possibility to do so, though, it would be better to upgrade the target framework to .NET 8 or later and that would solve the issue as well.
{"status":400,"headers":{},"requestID":null,"error":{"message":"Malformed input request: #: subject must not be valid against schema {"required":["messages"]}#/messages/1/content: expected minimum item count: 1, found: 0#/messages/1/content: expected type: String, found: JSONArray, please reformat your input and try again."}}
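Going by the schema hints in that error (messages[1].content is an empty array, and the service expects either a string or an array with at least one item), one hedged fix is to drop or repair empty-content messages before sending the request. The helper below is only a sketch; the field names are taken from the error text, not from a verified schema:

```python
def clean_messages(messages):
    """Drop messages whose 'content' is missing or empty, since the
    service rejects an empty content array ('expected minimum item
    count: 1, found: 0')."""
    return [m for m in messages if m.get("content") not in (None, "", [])]

# Hypothetical payload illustrating the failure mode described above.
payload = {
    "messages": [
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": []},  # would trigger the 400 above
    ]
}
payload["messages"] = clean_messages(payload["messages"])
print(payload["messages"])  # only the non-empty message survives
```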
Try turning off the HUD. It can cause problems with some sites.
Well, I went back to using my phone to preview my apps. I still have my virtual device, which I opened recently, but it sometimes bundles slowly.
I encountered a similar error when doing this test scenario (which worked in spring boot 3.2.5 but not anymore in spring boot 3.5.2):
import static org.springframework.security.test.web.servlet.setup.SecurityMockMvcConfigurers.springSecurity;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@SpringBootTest
@AutoConfigureMockMvc
class DefaultApiSecurityTest {

    @Autowired private WebApplicationContext context;

    private MockMvc mvc;

    @BeforeEach
    public void init() {
        this.mvc = MockMvcBuilders.webAppContextSetup(context).apply(springSecurity()).build();
    }

    @Test
    void accessToPublicRoutesAsAnonymousShouldBeGranted() throws Exception {
        this.mvc.perform(MockMvcRequestBuilders.get("/v3/api-docs")).andExpect(status().isOk());
    }
}
The solution was to follow https://stackoverflow.com/a/79322542/7059810, maybe the problem here was similar where the update ended up making the test scenario call a method which was now returning a 500 error.
LLVM team confirms that this is a compiler bug: see https://github.com/llvm/llvm-project/issues/145521
To expand on @Skenvy's answer: if the check you want to rerun uses a matrix to run multiple variations, the list of check runs from the GitHub API used in the "Rerequest check suite" step will have a different entry for each variation, with different names but the same check id. To handle this case, filter the output of that API call to checks whose name starts with JOB_NAME (instead of matching exactly), then take the unique values so the same ID doesn't get retriggered multiple times, which would cause the "Rerequest check suite" step to fail.
Here's an updated jq line you should use in the "Get check run ID" step that will do this:
jq '[.check_runs[] | select(.name | startswith("${{ env.JOB_NAME }}")) | select(.pull_requests != null) | select(.pull_requests[].number == ${{ env.PR_NUMBER }}) | .check_suite.id | tostring ] | map({(.):1}) | add | keys_unsorted[] | tonumber'
You need to use project before the render to select just the columns you want to appear in the chart
| project timestamp, duration
so in full would be:
availabilityResults
| where timestamp > ago(24h) //set the time range
| where name == "sitename" //set the monitor name
| project timestamp, duration
| render areachart with (xcolumn=timestamp, ycolumns=duration)