When you configure an AWS CLI profile for SSO, every command you run—even those against LocalStack—requires authentication via a valid SSO session. The CLI automatically checks for a cached SSO access token and, if it is missing or expired, prompts you to run `aws sso login`. Only after that token is retrieved can the CLI issue (mock or real) AWS API calls. This is documented in the AWS CLI behavior around IAM Identity Center sessions and SSO tokens.
AWS docs: https://docs.aws.amazon.com/cli/latest/reference/sso/login.html
"To login, the requested profile must have first been setup using `aws configure sso`. Each time the `login` command is called, a new SSO access token will be retrieved."
For LocalStack, you can bypass this by using a non-SSO profile with dummy static credentials (`aws_access_key_id` and `aws_secret_access_key`), since LocalStack does not validate them. This prevents unnecessary SSO logins while still allowing the AWS CLI and SDKs to function locally.
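For example, a minimal dummy profile like this (the profile name and key values are arbitrary placeholders; LocalStack accepts anything) avoids the SSO flow entirely:

```ini
# ~/.aws/credentials -- static dummy keys; LocalStack never validates them
[localstack]
aws_access_key_id = test
aws_secret_access_key = test
```

Set a region in `~/.aws/config` as usual, then point commands at LocalStack, e.g. `aws --profile localstack --endpoint-url=http://localhost:4566 s3 ls`.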
This is because DocumentDB does not support `isMaster`; it uses `hello` instead, particularly in newer releases (v5.0). Ensure your driver version is compatible and uses `hello`, or upgrade the cluster to v5.0 for better API alignment with MongoDB.
You may try Spectral confocal technology.
Spectral confocal technology is a non-contact method used to measure surface height, particularly for micro and nano-scale measurements. It works by analyzing the spectrum of reflected light from a surface, where different wavelengths correspond to different heights.
It is usually used to measure heights, but you can also obtain the reflected intensity of different surfaces from the results; you may need some normalization to convert it to white-light intensity.
You can make use of `--disableexcludes=all` with the yum install command, which overrides all the excludes from the /etc/yum.conf file.
In your case: yum install nginx --disableexcludes=all
SELECT NAME, TYPE, LINE, TEXT
FROM USER_SOURCE
WHERE TYPE = 'PROCEDURE'
AND UPPER(TEXT) LIKE '%PALABRA%';
That solved the problem.
Follow the steps below if the files /usr/share/libalpm/hooks/60-dkms.hook or /usr/share/libalpm/hooks/90-mkinitcpio-install.hook don't exist, but you do have /usr/share/libalpm/hooks/30-systemd-udev-reload.hook.
Here are the steps that I followed:
sudo mkdir -p /etc/pacman.d/hooks
sudo nano /etc/pacman.d/hooks/30-systemd-udev-reload.hook
[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Operation = Remove
Target = usr/lib/udev/rules.d/*
[Action]
Description = Skipping udev reload to avoid freeze
When = PostTransaction
Exec = /usr/bin/true
and the problem is resolved now.
This kind of issue may also occur because your Apple Developer account has expired. If your App Store membership has expired, you may face similar issues. Please make sure your account is renewed.
I had a similar use case a few years ago, so I created a small package that converts trained XGBClassifier and XGBRegressor models into Excel formulas by exporting their decision trees. https://github.com/KalinNonchev/xgbexcel
As far as I understand, there is no point in considering approximations that are slower than the standard acos() or acosf() functions. Achieving the same performance for correctly rounded double-precision values is extremely difficult, if not impossible, but it is quite possible to improve performance at an accuracy close to that of the single-precision format. Therefore, even approximations that seem successful should be tested for performance.
Since the arccosine of x has an unbounded derivative at the points x = ±1, the function being approximated should be transformed so that it becomes sufficiently smooth. I propose doing this as follows (I don't think this is a new approach): construct an approximation of the function
f(t) = arccos(t^2)/(1-t^2)^0.5
using the Padé-Chebyshev method, where t = |x|^0.5, -1 <= t <= 1. The function f(t) is even, fairly smooth, and can be well approximated by both polynomial and rational functions. The approximation is as follows:
f(t) ≈ (a0+a1*t^2+a2*t^4+a3*t^6)/(b0+b1*t^2+b2*t^4+b3*t^6) = p(t)/q(t).
Considering the relationship between the variables t and x, we can write:
f(x) ≈ (a0+a1*|x|+a2*|x|^2+a3*|x|^3)/(b0+b1*|x|+b2*|x|^2+b3*|x|^3) = p(x)/q(x).
After calculating the function f(x), the final result is obtained using one of the formulas:
arccos(x) = f(x)*(1-|x|)^0.5 at x>=0;
arccos(x) = pi-f(x)*(1-|x|)^0.5 at x<=0.
The coefficients of the rational function f(x), providing a maximum relative error of 8.6E-10, are as follows:
a0 = 1.171233654022217, a1 = 1.301361441612244, a2 = 0.3297972381114960, a3 = 0.01141332555562258;
b0 = 0.7456305027008057, b1 = 0.9303402304649353, b2 = 0.2947896122932434, b3 = 0.01890071667730808.
These coefficients are specially selected for calculations in single precision format.
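As a quick sketch in Python (just to check the math against the library function; a C version would use float arithmetic and the same Horner evaluation):

```python
import math

# Coefficients from above, selected for single-precision accuracy
A = (1.171233654022217, 1.301361441612244, 0.3297972381114960, 0.01141332555562258)
B = (0.7456305027008057, 0.9303402304649353, 0.2947896122932434, 0.01890071667730808)

def acos_approx(x):
    """arccos(x) via the rational approximation f(x) = p(|x|)/q(|x|)."""
    u = abs(x)
    p = A[0] + u * (A[1] + u * (A[2] + u * A[3]))  # Horner form of p(|x|)
    q = B[0] + u * (B[1] + u * (B[2] + u * B[3]))  # Horner form of q(|x|)
    f = (p / q) * math.sqrt(1.0 - u)
    return f if x >= 0 else math.pi - f

# Scan [-1, 1] and compare against math.acos
max_err = max(abs(acos_approx(i / 5000 - 1) - math.acos(i / 5000 - 1))
              for i in range(10001))
print(f"max abs error over [-1, 1]: {max_err:.2e}")
```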
An example implementation of the proposed method can be found in the adjacent topic "Fast Arc Cos algorithm?".
A workaround to the original code could be:
template<int...n> struct StrStuff {
    template<int...n0> explicit StrStuff(char const(&...s)[n0]) {}
};
template<int...n> StrStuff(char const(&...s)[n]) -> StrStuff<n...>;
int main() {
    StrStuff g("apple", "pie");
}
But I still wonder why the original code can/can't compile in different compilers.
Adding these configurations to `application.properties` just worked, as advised in this GitHub issue.
server.tomcat.max-part-count=50
server.tomcat.max-part-header-size=2048
The issue is that your Docker build does not have your Git credentials.
If it is a private repo, the simplest fix is to make a build argument with a personal access token:
ARG GIT_TOKEN
RUN git clone https://${GIT_TOKEN}@github.com/username/your-repo.git
Then build with:
docker build --build-arg GIT_TOKEN=your_token_here -t myimage .
Just make sure that you are using a personal access token from GitHub, and not your password - GitHub does not allow password auth anymore.
If it is a public repo and is still not working, try:
RUN git config --global url."https://".insteadOf git://
RUN git clone https://github.com/username/repo.git
Sometimes the git:// protocol causes problems inside Docker builds.
Edit: Also, as mentioned in comments, be careful about tokens in build args - because they may appear in image history, and this could pose a risk. For production purposes, consider using Docker BuildKit's --mount=type=ssh option instead.
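A minimal sketch of the BuildKit secret approach (repository URL and secret id are placeholders): the secret is mounted only for the single RUN step and never stored in an image layer.

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine/git
# The token file is mounted at /run/secrets/git_token for this step only
RUN --mount=type=secret,id=git_token \
    git clone "https://$(cat /run/secrets/git_token)@github.com/username/your-repo.git"
```

Build with `docker build --secret id=git_token,src=token.txt -t myimage .` so the token stays out of the image history.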
For multiples of 90°, you can use page.set_rotation(). For arbitrary angles, render the page as an image with a rotation matrix, then insert it back into a PDF if needed. This isn't a true vector transformation but a raster workaround, as MuPDF and most PDF formats do not natively support non-orthogonal page rotations.
To meet your requirements in a batch script:
1. Move a file from one path to another: use the move command.
2. Rename the file, changing the Julian date to DDMMYYYY: extract the Julian date from the name, convert it, and rename the file.
Here is an example batch script that performs both tasks. Suppose the original file has a name like archivo_2024165.txt (where 2024165 is the Julian date: year 2024, day 165).
-----------------------------------------------------------------------------------------------------------------------------------
@echo off
setlocal enabledelayedexpansion
REM Configure the paths
set "origen=C:\ruta\origen\archivo_2024165.txt"
set "destino=C:\ruta\destino"
REM Move the file
move "%origen%" "%destino%"
REM Extract the name of the moved file
for %%F in ("%destino%\archivo_*.txt") do (
set "archivo=%%~nxF"
REM Extract the Julian date from the name
for /f "tokens=2 delims=_" %%A in ("!archivo!") do (
set "fechaJuliana=%%~nA"
set "anio=!fechaJuliana:~0,4!"
set "dia=!fechaJuliana:~4,3!"
REM Convert the Julian day to a DDMMYYYY date
powershell -Command "$date = [datetime]::ParseExact('!anio!', 'yyyy', $null).AddDays(!dia! - 1); Write-Host $date.ToString('ddMMyyyy')" > temp_fecha.txt
set /p fechaDDMMYYYY=<temp_fecha.txt
del temp_fecha.txt
REM Rename the file
ren "%destino%\!archivo!" "archivo_!fechaDDMMYYYY!.txt"
)
)
endlocal
-----------------------------------------------------------------------------------------------------------------------------------
• Modify the source and destination paths as needed.
• The script uses PowerShell to convert the Julian date to DDMMYYYY, since pure batch has no advanced date functions.
• The final name will be archivo_DDMMYYYY.txt.
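The Julian-day arithmetic the PowerShell one-liner performs can be sanity-checked; the same conversion, shown in Python purely for illustration:

```python
from datetime import date, timedelta

def julian_to_ddmmyyyy(julian):
    """Convert 'YYYYDDD' (e.g. '2024165') to 'DDMMYYYY'."""
    year, day = int(julian[:4]), int(julian[4:])
    d = date(year, 1, 1) + timedelta(days=day - 1)  # day 1 == January 1
    return d.strftime("%d%m%Y")

print(julian_to_ddmmyyyy("2024165"))  # -> 13062024 (day 165 of 2024 is June 13)
```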
Primitives and their object counterparts are not proxyable, as per the spec. If you need the value to live in the request scope, use a wrapper class that is actually proxyable. If you make it @Dependent, you will be able to inject it as an Integer, but there may be overhead because of the nature of dependent beans.
You can open up two tabs or windows on the same view and have different preview devices showing. I hate this, but it works.
1. Check Project Java Build Path
2. Update Installed JREs in Eclipse
3. Project Compiler Compliance Level
4. Check Source and Target Compatibility
5. Restart Eclipse/Refresh Workspace
6. Check for Errors in Problems View
7. Update Content Assist Settings
The build system generates the SDL3 library in the build folder, but imgui was not searching in the correct directory due to a wrong command: `target_link_directories(imgui PUBLIC SDL3)` on the last line of the vendors/imgui/CMakeLists.txt file needs to be `target_link_libraries(imgui PUBLIC SDL3::SDL3)`.
I can see why you'd want to build this feature, but unfortunately, detecting whether a user has an active screen-sharing session via any external application (like TeamViewer, Zoom, or Google Meet) isn't directly possible from a web-based application using JavaScript. This is a deliberate limitation for security and privacy reasons.
You can also do the following :
Go to Settings
Type "update mode" in the search bar
Ensure that "Update: Mode" is NOT set to "none"
Then "Check for Updates..." would be in the "Code" menu.
I couldn't find any other way to create a shortcut to the shared folder than creating it manually using PowerShell. This is what I did in my code. Thank you everyone for your replies.
- name: 'Create Folder shortcut on G: drive'
  ansible.windows.win_shell: |
    $WshShell = New-Object -ComObject WScript.Shell
    $Shortcut = $WshShell.CreateShortcut("G:\\Folder1.lnk")
    $Shortcut.TargetPath = "\\\\SERVER1\\Folder1"
    $Shortcut.Save()
You can refer to my article on data types in MySQL: https://webmoi.vn/cac-kieu-du-lieu-trong-mysql/
This works great with custom fonts and with frame updates that cause layout changes:
Here's the code:
public struct FirstLineCenterID: AlignmentID {
static func defaultValue(in d: ViewDimensions) -> CGFloat {
d[VerticalAlignment.center]
}
}
/// Custom vertical alignment used to coordinate views against the **first line center**.
extension VerticalAlignment {
static let firstLineCenter = VerticalAlignment(FirstLineCenterID.self)
}
// MARK: - FirstLineCenteredLabel
/// A `Label`-like view that displays a leading icon and a text label, aligning the icon
/// to the **vertical midpoint of the text’s first line**.
struct FirstLineCenteredLabel<Icon>: View where Icon : View {
let text: String
let spacing: CGFloat?
let icon: Icon
/// Cached measured height of a single line for the current font.
@State private var firstLineHeight: CGFloat?
/// The effective font pulled from the environment; used by both visible and measuring text.
@Environment(\.font) var font
init(
_ text: String,
spacing: CGFloat? = nil,
@ViewBuilder icon: () -> Icon
) {
self.text = text
self.spacing = spacing
self.icon = icon()
}
var body: some View {
HStack(alignment: .firstLineCenter, spacing: spacing) {
let text = Text(text)
icon
// aligns by its vertical center
.alignmentGuide(.firstLineCenter) { d in d[.top] + d.height / 2 }
.font(font)
text
.font(font)
.fixedSize(horizontal: false, vertical: true)
// aligns by the first line's vertical midpoint
.alignmentGuide(.firstLineCenter) { d in
let h = firstLineHeight ?? d.height
return d[.top] + h / 2
}
// Measure the natural height of a single line **without impacting layout**:
// a 1-line clone in an overlay with zero frame captures `geo.size.height` for the
// current environment font. This avoids the `.hidden()` pitfall which keeps layout space.
.overlay(alignment: .topLeading) {
text.font(font).lineLimit(1).fixedSize()
.overlay(
GeometryReader { g in
Color.clear
.onAppear { firstLineHeight = g.size.height }
.onChange(of: g.size.height) { firstLineHeight = $0 }
}
)
.opacity(0)
.frame(width: 0, height: 0)
.allowsHitTesting(false)
.accessibilityHidden(true)
}
}
}
}
Usage:
var body: some View {
VStack {
FirstLineCenteredLabel(longText, spacing: 8) {
Image(systemName: "star.fill")
}
FirstLineCenteredLabel(shortText, spacing: 8) {
Image(systemName: "star.fill")
}
Divider()
// Just to showcase that it can handle various font sizing.
FirstLineCenteredLabel(longText, spacing: 8) {
Image(systemName: "star.fill")
.font(.largeTitle)
}
.font(.caption)
}
}
private var longText: String {
"This is a new label view that places an image/icon to the left of the text and aligns it to the text's first line vertical midpoint."
}
private var shortText: String {
"This should be one line."
}
import time

def slow_print(text, delay=0.04):
    for char in text:
        print(char, end='', flush=True)
        time.sleep(delay)
    print()

def escolha_personagem():
    slow_print("Escolha sua classe:")
    slow_print("1 - Guerreiro")
    slow_print("2 - Mago")
    slow_print("3 - Ladino")
    classe = input("Digite o número da sua escolha: ")
    if classe == "1":
        return "Guerreiro"
    elif classe == "2":
        return "Mago"
    elif classe == "3":
        return "Ladino"
    else:
        slow_print("Escolha inválida. Você será um Aventureiro.")
        return "Aventureiro"

def inicio_historia(classe):
    slow_print(f"\nVocê acorda em uma floresta sombria. Como um(a) {classe}, seu instinto o guia.")
    slow_print("De repente, um velho encapuzado aparece diante de você...")
    slow_print("Kael: 'Você finalmente despertou...'")
You've said in the comments that
I could see some extra bytes added in the corrupted file.
... well there's your problem, rather than "flushing/closing".
What extra bytes? I wonder if it is the "Byte Order Mark". Read about it here https://stackoverflow.com/a/48749396/1847378 - this article is about how to overcome a file/input stream with a BOM that you don't want. That's the opposite problem, though.
Maybe the use of `stream` is unhelpfully messing around with the stream. Just as a test at least, how about passing a `ByteArrayOutputStream()` to the `outputDocument(..)` and then passing the `byte[]` (from `ByteArrayOutputStream.toByteArray()`) to the JAX-RS `Response`?
According to this, the maximum deepsleep is 2^45 microseconds, or just over 407 days.
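The arithmetic checks out; a one-line sanity check:

```python
# 2^45 microseconds expressed in days
us = 2 ** 45
days = us / 1_000_000 / 86_400  # microseconds -> seconds -> days
print(f"2**45 us = {days:.1f} days")  # just over 407 days
```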
It turns out that having Claude test things in Chrome on its own made copies of Chrome in a temp folder that were never deleted. The path was:
/private/var/folders/c3/6s_l3vn96s5b8b9f08szx05w0000gn/X/com.google.Chrome.code_sign_clone/
I deleted all the temp files in there and got back 200 GB of space.
For Android you can use React Native's own permissions API (PermissionsAndroid), and for iOS the library you are using is one of the best, but it has some minor issues.
For iOS you can use three libraries separately:
react-native-geolocation-service (for location)
react-native-documents (for documents)
@react-native-camera-roll/camera-roll (for the camera roll)
Some `.so` library files in the command output are 2**12 == 4 KB aligned, hence the message that the ELF alignment check failed.
Please check this link for detailed answer. I am posting a summary of what you need to do here:
Steps to support 16KB page alignment in Flutter app: (First of all create a backup of your original project, and try the following steps in a copy of the project. It's always good to have a backup.)
As stated in the official documentation, you need to update AGP to version 8.5.1 or higher to be able to compile with 16 KB page sizes. The documentation says to upgrade the NDK to version 28, but versions 26 and 27 are also compatible, so you may use any of 26, 27, or 28. The respective files to edit are android/settings.gradle, where you look for a line like id "com.android.application" version "8.7.3" apply false and change it to a compatible version, and android/app/build.gradle, where you may change the line ndkVersion "27.0.12077973".
Your project code, if it contains native code, must be updated to support 16 KB page sizes.
Your project dependencies listed in the pubspec.yaml file, both direct and dev dependencies, may need to be updated if they depend on native code. If you can identify the individual ones, update only those to the appropriate version; otherwise, update all the dependency packages in your pubspec.yaml file. The transitive dependencies should be updated as well. How: to update the direct and dev dependencies, update the corresponding version number for each package in the pubspec.yaml file, then run flutter clean; flutter pub get from the project root. To upgrade the transitive dependencies, view the table shown by flutter pub outdated and update them with the flutter pub upgrade command (or flutter pub upgrade --major-versions).
After you've updated the dependencies, try to run the project. Additional configuration changes may be requested and displayed as error messages; do as they suggest. You may update your question if you need help with that.
After you fix these, check the 16 KB alignment of the `.so` files again, and also test on a 16 KB-compliant emulator or device.
Note: to update the dependencies in your pubspec.yaml, VS Code extensions like Version Lens can ease the process. Similar tools should exist for Android Studio.
The emoji is a non-existent topic, so publishing messages also fails with the following message: InvalidTopicException: Invalid topics: [😃]. Everything is good so far.
The problem is that I now have an infinite loop.
I honestly don't get what else you expect.
Let's see: you use the DefaultErrorHandler as is, which works at Spring container level & covers events happening BEFORE (and AFTER) whatever happens in the listener POJO.
Then, when it exhausts all attempts prescribed by your BackOff, it executes a component that itself throws an error.
It's executing in the same container context, so where does it fall then? Back into your DefaultErrorHandler!
And here you go again.
I'm not quite sure what the reason for an "experiment" like this would be, besides sheer boredom and nothing else to do.
But IF there's an engineering reason for this, you can extend the DefaultErrorHandler by overriding the appropriate method of it, likely handleOne() (or implement a whole CommonErrorHandler yourself) and deal with the situation.
And to top all that...
I throw an exception in the Kafka listener code, so messages will fail and this error handler will be used.
... if there's an exception in @KafkaListener POJO (means YOUR processing part, not framework's), another error handler is to be utilized - the KafkaListenerErrorHandler implementation.
Yes, the Vault API and DLLs change between versions, so code compiled against AutoCAD Vault 2012 libraries will not work reliably with Vault 2015 or newer. You’ll need to reference and build against the matching Vault SDK for each version. There’s no true “write once, run everywhere” approach with Vault, but you can structure your code to use abstraction/interfaces and compile separate builds per version, or use conditional references to target multiple Vault releases.
Xcode 16:
I just waited 10-20 seconds and the code appeared.
Try putting `CREATE OR REPLACE TABLE ... AS` before your query.
CREATE OR REPLACE TABLE your_dataset.your_table AS
WITH my_cte AS (
SELECT ...
)
SELECT * FROM my_cte;
This keeps your query the same, but saves the result into a table.
I've been having the exact same problem in Next.js with Tailwind and TypeScript, dynamically swapping between `sm:flex-row` and `sm:flex-row-reverse` (in my case using template literals and a constant defined by the index of the item in a map).
It has an inconsistent bug where suddenly, after making a change anywhere on the page, all the items that should resolve to `flex-row-reverse` instead revert to being styled as the default breakpoint's `flex-col`. This issue will then persist through more changes, reloads, etc., until it randomly fixes itself again. I have tried other methods of dynamically setting the direction; I've tried just putting `sm:flex-row` and then only dynamically adding `-reverse` to the end, with no success. I truly don't know why it's happening or how to fix it at this point.
Here's a rough outline of the code:
<ul className="flex flex-col list-none">
{arr.map((entry, index) => {
const direction = index % 2 === 0 ? 'flex-row' : 'flex-row-reverse';
return (
<li key={entry.id} className={`flex flex-col sm:${direction} border-2 p-[1rem] my-[1rem]`}>
<Image
className="object-contain border border-black"
width={300}
height={300}
priority
/>
<div>
<div className={`flex ${direction}`}>
<h2 className="border px-[0.5rem]">text</h2>
<div className="w-auto h-1/3 p-[0.25rem] border">text</div>
</div>
<div className={`flex ${direction} p-[0.5rem]`}>
<div className="mx-[1rem]">
text
</div>
<ul className="flex flex-col px-[2rem] py-[1rem] mx-[0.5rem] list-disc border">
<li>
text
</li>
<li>
text
</li>
<li>
text
</li>
</ul>
</div>
</div>
</li>
)
})}
</ul>
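For what it's worth, a likely explanation (per Tailwind's own documentation, which warns against constructing class names dynamically): the JIT engine only generates CSS for class names that appear as complete, unbroken strings in the source. `flex-row-reverse` appears literally in the ternary, so the unprefixed classes work, but `sm:flex-row-reverse` only ever exists as the interpolated `sm:${direction}` and can be missed by the scanner. A sketch of the safer pattern, keeping every variant class literal:

```jsx
// Complete class strings visible to Tailwind's scanner (sketch, not the full component)
const direction = index % 2 === 0 ? 'sm:flex-row' : 'sm:flex-row-reverse';

<li key={entry.id} className={`flex flex-col ${direction} border-2 p-[1rem] my-[1rem]`}>
```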
To solve this issue, I had to restart my computer; my system runs on Ubuntu.
System settings, Google settings, other Google apps, Assistant settings, transportation
Set default transportation mode to walking
Now Google Maps will always start in walking mode
It's reported as an issue on azure-cli as well: https://github.com/Azure/azure-cli/issues/23643
"I ran into this as well while using a "fine grained PAT." I fixed it by enabling webhook read/write access."
Does using a token with more permissions resolve this issue?
Turns out application is a reserved word, or rather it's not allowed as part of a form field name. The Parameter interceptor sets fields on the Action but, for security reasons, excludes any parameter name that matches the excluded parameters regex pattern. More is found at https://struts.apache.org/core-developers/parameters-interceptor#excluding-parameters
The documentation is wrong and, for 6.4.0, the exclusion patterns are the following pair of hideous monstrosities.
(^|\%\{)((#?)(top(\.|\['|\[\")|\[\d\]\.)?)(dojo|struts|session|request|response|application|servlet(Request|Response|Context)|parameters|context|_memberAccess)(\.|\[).*
.*(^|\.|\[|\'|\"|get)class(\(\.|\[|\'|\").*
Application is an object on the Value Stack and a bad person might edit parameter names to hack it.
A different exclusion pattern can be set per Action or for the whole application but, as you've discovered, it's just easier to use a different form name.
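For illustration, the first pattern can be exercised from Python (the pattern is copied verbatim from above; `re.match` is a stand-in for the interceptor's matching), which shows why a field like application.name is dropped while a renamed field gets through:

```python
import re

# First exclusion pattern from the answer (as shipped in Struts 6.4.0)
EXCLUDED = (r"(^|\%\{)((#?)(top(\.|\['|\[\")|\[\d\]\.)?)"
            r"(dojo|struts|session|request|response|application"
            r"|servlet(Request|Response|Context)|parameters|context|_memberAccess)(\.|\[).*")

def is_excluded(param):
    return re.match(EXCLUDED, param) is not None

print(is_excluded("application.name"))    # True  - dropped by the interceptor
print(is_excluded("myApplication.name"))  # False - a renamed field passes through
```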
What worked for me was
camel.main.run-controller = true
This is printed in the log at startup and in https://github.com/apache/camel-spring-boot-examples/blob/main/activemq/src/main/resources/application.properties.
You can mark the containing directory of the header files as system include. Usually compilers do not complain about those (gcc, clang, MSVC).
Either using SYSTEM in
target_include_directories(<target> [SYSTEM] [AFTER|BEFORE]
<INTERFACE|PUBLIC|PRIVATE> [items1...]
[<INTERFACE|PUBLIC|PRIVATE> [items2...] ...])
or
set_target_properties(xxxDepXxx PROPERTIES INTERFACE_SYSTEM_INCLUDE_DIRECTORIES $<TARGET_PROPERTY:xxxDepXxx,INTERFACE_INCLUDE_DIRECTORIES>)
See more info in this SO question:
How to suppress Clang warnings in third-party library header file in CMakeLists.txt file?
There are no errors; just update to 8.3.1 in Tools → AGP.
Just a hint for anyone still having the same issue. If the circumstances of the process death aren't important (exit code and/or signal), there is a workaround: one can detect a process death by secondary indicators, relying on the *nix guarantee that on exit each process closes all of its descriptors. Knowing this, one can create a simple pipe, where each end is held by one party.
Then a simple select on one end of the pipe will detect the event of the other end being closed (process exit). It is very common in this case to already have some communication line between parent and child (stdin/stdout/stderr or a custom pipe/unix socket), in which case that is all that is needed here.
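A minimal sketch of the trick in Python (POSIX only; fork and select from the standard library):

```python
import os
import select
import time

r, w = os.pipe()            # one end for each party
pid = os.fork()
if pid == 0:
    os.close(r)             # child keeps only the write end
    time.sleep(0.2)         # ... the child does its work ...
    os._exit(0)             # on exit the kernel closes every descriptor, including w

os.close(w)                 # parent must drop its copy, or EOF never arrives
select.select([r], [], [])  # blocks until the peer's write end is closed
peer_gone = os.read(r, 1) == b''  # zero-byte read means the peer closed the pipe
os.waitpid(pid, 0)          # reap the child
print("child exit detected:", peer_gone)
```

The same scheme works with any inherited descriptor (stdout, a unix socket), which is why no extra plumbing is usually needed.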
I suggest using this plugin developed by me, vim-simple-guifont: https://github.com/awvalenti/vim-simple-guifont
Then you can do:
" This check avoids loading plugin when Vim is running on terminal
if has('gui_running')
silent! call simple_guifont#Set(
\['Cascadia Code PL', 'JetBrains Mono', 'Hack'], 'Consolas', 14)
endif
For more details, please check the plugin page on GitHub.
I suggest using this plugin developed by me, vim-simple-guifont: https://github.com/awvalenti/vim-simple-guifont
Then you can simply do:
silent! call simple_guifont#Set(['Monaco New'], 'monospace', 11)
For more details, please check the plugin page on GitHub.
I've written the occasional extension method in situations where it seemed to make sense, and I absolutely love what you can do with Linq, but I think there's one major drawback to the overuse of extension methods that no one else here has mentioned.
They can be a nightmare for Unit Testing, as they can severely complicate the process of Mocking public interfaces.
Numerous times now, I've approached writing a unit test that seems like it will be pretty straightforward. The method I'm testing has a dependency on an external service (ISomeService) and calls the .Foo() method on that service.
So I create a mock ISomeService object using Moq or another mocking framework, and go ahead and attempt to mock the Foo() call, only to discover that Foo() is an extension method. So now I have to dig into third-party extension method code to try to figure out which actual member of the ISomeService interface is ultimately being called by the Foo() extension method. And maybe Foo() calls Foo(int, string), which is also an extension method, and that calls Foo(int, string, SomeEnumType, object[]), and on and on it goes until eventually I find where it's actually calling a method that is a member of the ISomeService interface.
And that's if I'm lucky enough to find that my original Foo() call ultimately maps to just one invocation on the actual interface. If I'm unlucky, I may dig through a tangled web of third-party extension method code to ultimately find calls to ISomeService.Bar(), ISomeService.Blam(string, bool), ISomeService.Fizzle(object[]), and ISomeService.Whomp(IEnumerable<Task<bool>>). And now I need to figure out how to mock all those invocations just to adequately mock my one simple call to the Foo() extension method.
And that's not even the worst case scenario. Sometimes that tangled web of extension methods ends up referencing and passing around instances of types that are internal to those third party libraries, so I don't even have access to directly reference the types in my mocks. And all this time I'm screaming in my head, "Did you REALLY need to put all this stuff in extension methods??? Just make Foo() part of the interface!"
So, I would say if you're working on library code that's meant to be consumed by third parties, have mercy on those of us who just want to write good, well-tested code and use extension methods sparingly.
You are right. As of now, this feature is not yet supported by the WhatsApp Business API.
The error message you’ve encountered might be due to your bot not sending a response in time. Take note that the chat app must send a response within 30 seconds of receiving the event.
Another reason could be that the response was not formatted correctly. Make sure that your bot follows the JSON format, or else the response will be rejected and you will get that error message. You can refer to the documentation about responding to interaction events for more information.
This project's configure script builds shared libraries by default. If you run the configure script with the `--disable-shared` flag, it will build the static (*.a) version instead.
Of course, we could come up with a solution that uses `'c4'`, but I see the solution this way.
We can get the form you need with the help of an auxiliary column `'group'`. It will help us index the values for the future transformation.
Now we will write a function that creates a `pd.Series`. We take the values and place them into one array using `flatten()`.
def grouping(g):
    return pd.Series(g[['c1', 'c2', 'c3']].values.flatten(),
                     index=[f'c{i+1}' for i in range(9)])
We apply the function to the `DataFrame` grouped by the auxiliary column `'group'`.
Full code:
import pandas as pd
data = {
'c1': [1, 10, 100, 1, 10, 100],
'c2': [2, 20, 200, 2, 20, 200],
'c3': [3, 30, 300, 3, 30, 300],
'c4': [0, 1, 2, 0, 1, 2]
}
df = pd.DataFrame(data)
df['group'] = df.index // 3
def grouping(g):
    return pd.Series(g[['c1', 'c2', 'c3']].values.flatten(),
                     index=[f'c{i+1}' for i in range(9)])
transformed_df = df.groupby('group').apply(grouping).reset_index(drop=True)
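Since the block size is fixed at three rows, the same reshape can also be sketched without groupby at all, at the NumPy level (same data as above, shown self-contained):

```python
import pandas as pd

data = {
    'c1': [1, 10, 100, 1, 10, 100],
    'c2': [2, 20, 200, 2, 20, 200],
    'c3': [3, 30, 300, 3, 30, 300],
    'c4': [0, 1, 2, 0, 1, 2],
}
df = pd.DataFrame(data)

# Every 3 consecutive rows of c1..c3 become one row of 9 columns
wide = pd.DataFrame(df[['c1', 'c2', 'c3']].to_numpy().reshape(-1, 9),
                    columns=[f'c{i+1}' for i in range(9)])
print(wide)
```

This avoids the per-group Python function call, which matters for larger frames.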
I have updated the sub posted by @xiaoyaosoft using the CompactDatabase method as suggested by @June7. Tested for all combinations of setting, changing, or removing a password, and also encrypting or decrypting the database. I didn't look at the database files at a low level after any of these changes to see what effect the encryption process has; I merely verified that the database could be opened and read in Access after each change.
' Procedure : Set_DBPassword
' Author : Daniel Pineault, CARDA Consultants Inc.
' Website : http://www.cardaconsultants.com
' Purpose : Change the password of any given database
' Copyright : The following may be altered and reused as you wish so long as the
' copyright notice is left unchanged (including Author, Website and
' Copyright). It may not be sold/resold or reposted on other sites (links
' back to this site are allowed).
'
' Input Variables:
' ~~~~~~~~~~~~~~~~
' sDBName : Full path and file name with extension of the database to modify the pwd of
' sOldPwd : Existing database pwd - use "" if db is unprotected
' sNewPwd : New pwd to assign - Optional, leave out if you wish to remove the
' existing pwd
' bEncrypt : Encrypt the database if adding a new password, or decrypt it if removing
' (has no effect if only changing an existing password)
'
' Usage:
' ~~~~~~
' Set a pwd on a db which never had one
' Set_DBPassword "C:\Users\Daniel\Desktop\db1.accdb", "", "test"
'
' Clear the password on a db which previous had one
' Set_DBPassword "C:\Users\Daniel\Desktop\db1.accdb", "test", "" 'Clear the password
'
' Change the pwd of a pwd protected db
' Set_DBPassword "C:\Users\Daniel\Desktop\db1.accdb", "test", "test2"
'
' Revision History:
' Rev Date(yyyy/mm/dd) Description
' **************************************************************************************
' 2 2025-Aug-20 Made it work to or from a blank password (by MEM)
' 1 2012-Sep-10 Initial Release
'---------------------------------------------------------------------------------------
Private Sub Set_DBPassword(sDBName As String, sOldPwd As String, Optional sNewPwd As String = "", Optional bEncrypt As Boolean = False)
On Error GoTo Error_Handler
Dim db As DAO.Database
'Password can be a maximum of 20 characters long
If Len(sNewPwd) > 20 Then
MsgBox "Your password is too long and must be 20 characters or less." & _
" Please try again with a new password", vbCritical + vbOKOnly
GoTo Error_Handler_Exit
End If
'Could verify pwd strength
'Could verify ascii characters
If sOldPwd = vbNullString And sNewPwd <> vbNullString Then ' use temporary file
If bEncrypt Then
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd, dbEncrypt
Else
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd
End If
Kill sDBName
Name sDBName & ".$$$" As sDBName
ElseIf sOldPwd <> vbNullString And sNewPwd = vbNullString Then ' use temporary file database
If bEncrypt Then
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd, dbDecrypt, ";pwd=" & sOldPwd
Else
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd, , ";pwd=" & sOldPwd
End If
Kill sDBName
Name sDBName & ".$$$" As sDBName
Else
Set db = OpenDatabase(sDBName, True, False, ";PWD=" & sOldPwd) 'open the database in exclusive mode
db.NewPassword sOldPwd, sNewPwd 'change the password
End If
Error_Handler_Exit:
On Error Resume Next
Kill sDBName & ".$$$"
db.Close 'close the database
Set db = Nothing
Exit Sub
Error_Handler:
'err 3704 - not able to open exclusively at this time, someone using the db
'err 3031 - sOldPwd supplied was incorrect
'err 3024 - couldn't locate the database file
MsgBox "The following error has occurred." & vbCrLf & vbCrLf & _
"Error Number: " & Err.Number & vbCrLf & _
"Error Source: Set_DBPassword" & vbCrLf & _
"Error Description: " & Err.Description, _
vbCritical, "An Error has Occurred!"
Resume Error_Handler_Exit
End Sub
'from :https://www.devhut.net/ms-access-vba-change-database-password/
Use OAuth instead; don't use a PAT, as it's not recommended and not secure.
Here are the steps:
https://docs.databricks.com/aws/en/admin/users-groups/manage-service-principals
Not all of the Mapbox examples include all the elements in a form, but the documentation states: "Eligible inputs must be a descendant of a <form> element."
I had missed the form requirement the first time and got inconsistent performance. Once I wrapped those fields in a form, it all worked well.
This might be an Xcode issue. I found an open GitHub thread that seems related to your issue, and it includes a list of workarounds from other developers that might help you gain insights into the issue you're encountering.
I had the same problem. Since Anaconda managed my Python environment, I generated a requirements.txt file using pip and created a virtual environment as indicated by PzKpfwlVB, including auto-py-to-exe. As a result, I was able to generate the executable file including PySide6.
I was able to correctly convert diacritic characters to their lowercase counterparts by creating the following auxiliary table with two columns and inserting pairs of diacritic characters in lower and upper case.
CREATE TABLE Latin1Accents (
UCASE STRING,
LCASE STRING
);
INSERT INTO Latin1Accents (UCASE, LCASE) VALUES
('À', 'à'),
('Á', 'á'),
('Â', 'â'),
('Ã', 'ã'),
('Ä', 'ä'),
('Å', 'å'),
('Ç', 'ç'),
('È', 'è'),
('É', 'é'),
('Ê', 'ê'),
('Ë', 'ë'),
('Ì', 'ì'),
('Í', 'í'),
('Î', 'î'),
('Ï', 'ï'),
('Ñ', 'ñ'),
('Ò', 'ò'),
('Ó', 'ó'),
('Ô', 'ô'),
('Õ', 'õ'),
('Ö', 'ö'),
('Ù', 'ù'),
('Ú', 'ú'),
('Û', 'û'),
('Ü', 'ü'),
('Ý', 'ý');
A CASE is used to verify if the UNICODE representation of the first character in the string is above 127, in order to find out if the character could be a diacritic one. If not, the lower function is used to convert the character. However, if the UNICODE value is above 127, a subquery is used to look for the lowercase representation of that character in the Latin1Accents auxiliary table. If the lowercase character could not be found in that table, the original character is returned.
SELECT Customer_Name,
SUBSTR (Customer_Name,1,1) as "First Letter", UNICODE (SUBSTR (Customer_Name,1,1)) ,
CASE
WHEN UNICODE (SUBSTR (Customer_Name,1,1)) > 127 THEN
(SELECT CASE WHEN LCASE IS NULL THEN SUBSTR (Customer_Name,1,1) ELSE LCASE END
FROM Latin1Accents
WHERE UCASE = SUBSTR (Customer_Name,1,1) )
ELSE
LOWER (SUBSTR (Customer_Name,1,1))
END as "First Letter in Lowercase"
FROM Customer
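The same lookup can be sketched with Python's built-in sqlite3 module, assuming an SQLite-style engine where lower() only folds ASCII (which is exactly the motivation for the auxiliary table). The schema is a trimmed version of the one above, and COALESCE replaces the nested CASE as a small simplification:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Latin1Accents (UCASE TEXT, LCASE TEXT);
INSERT INTO Latin1Accents (UCASE, LCASE) VALUES ('É', 'é'), ('Ü', 'ü');
CREATE TABLE Customer (Customer_Name TEXT);
INSERT INTO Customer VALUES ('Émile'), ('Zoe');
""")

rows = con.execute("""
SELECT Customer_Name,
       CASE
         -- non-ASCII first letter: look it up in the auxiliary table
         WHEN UNICODE(SUBSTR(Customer_Name, 1, 1)) > 127 THEN
           (SELECT COALESCE(LCASE, SUBSTR(Customer_Name, 1, 1))
            FROM Latin1Accents
            WHERE UCASE = SUBSTR(Customer_Name, 1, 1))
         -- plain ASCII: SQLite's LOWER() is enough
         ELSE LOWER(SUBSTR(Customer_Name, 1, 1))
       END AS first_lower
FROM Customer
""").fetchall()
print(rows)  # [('Émile', 'é'), ('Zoe', 'z')]
```

Note that if the accented character is missing from the table entirely, the scalar subquery returns NULL, so in production you would want to extend the table to cover all characters you expect.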
2-Install the package:
pip install supervision
# or
pip3 install supervision
In https://deepai.org/machine-learning-glossary-and-terms/affine-layer I have found the following:
"Mathematically, the output of an affine layer can be described by the equation:
output = W * input + b
where:
W is the weight matrix.
input is the input vector or matrix.
b is the bias vector.
This equation is the reason why the layer is called "affine" – it consists of a linear transformation (the matrix multiplication) and a translation (the bias addition)."
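A minimal NumPy sketch of that equation (the shapes here are just illustrative assumptions):

```python
import numpy as np

# Affine layer: output = W @ input + b
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # weight matrix (2 outputs, 2 inputs)
b = np.array([0.5, -0.5])    # bias vector
x = np.array([1.0, 1.0])     # input vector

# Linear transformation (matrix multiply) plus translation (bias)
output = W @ x + b
print(output)  # [3.5 6.5]
```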
The solution does not work for Forge 1.21.8:
cannot find symbol
symbol: method getModEventBus()
location: variable context of type FMLJavaModLoadingContext
Maybe it's the new version of API Platform (4.1 now), but the config parameter worked like a charm for me, exactly where you'd written it.
The main point for me was to find all the ManyToMany relations.
When you are using Bramus Router in PHP and defining routes that point to a static method, that static method must be declared with public visibility.
Example:
$router->get(
pattern: '/hr/users',
fn: [UserController::class, 'index']
);
When installing Git and Visual Studio Code, please refer to their official documentation. I would recommend this rather than installing Ubuntu's snap packages, as snaps tend to be slower.
And of course, you are using Ubuntu, which is Debian-based, so installing a Debian package should be smooth for you.
For Git:
https://git-scm.com/downloads/linux
# For the latest stable version for your release of Debian/Ubuntu
sudo apt-get install git
# For Ubuntu, this PPA provides the latest stable upstream Git version
sudo add-apt-repository ppa:git-core/ppa
sudo apt update; sudo apt install git
For Visual Studio Code:
Download the .deb (Debian, Ubuntu) under Linux category.
https://code.visualstudio.com/download
# After downloading it, install it using the following command
sudo apt install ./code-latest.deb
After doing all of these, it should be working out of the box now. Enjoy coding!
JEP 488, released with Java 24 as a preview feature, adds support for primitive types in the instanceof
operator. This means you can now simply write:
b instanceof byte
To deploy a Next.js app to IIS, you can follow these steps:
1- Install the following modules
IIS NODE
https://github.com/Azure/iisnode/releases/tag/v0.2.26
URL REWRITE
https://www.iis.net/downloads/microsoft/url-rewrite
Application Request Routing
https://www.iis.net/downloads/microsoft/application-request-routing
2- Create a folder on your C: drive and copy the following into it:
The .next folder you get when you do npm run build.
The public folder
The node_modules
3- Create a server.js file in your folder with the following information.
const { createServer } = require("http");
const { parse } = require("url");
const next = require("next");
const dev = process.env.NODE_ENV !== "production";
const port = process.env.PORT;
const hostname = "localhost";
const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();
app.prepare().then(() => {
createServer(async (req, res) => {
try {
const parsedUrl = parse(req.url, true);
const { pathname, query } = parsedUrl;
if (pathname === "/a") {
await app.render(req, res, "/a", query);
} else if (pathname === "/b") {
await app.render(req, res, "/b", query);
} else {
await handle(req, res, parsedUrl);
}
} catch (err) {
console.error("Error occurred handling", req.url, err);
res.statusCode = 500;
res.end("internal server error");
}
})
.once("error", (err) => {
console.error(err);
process.exit(1);
})
.listen(port, async () => {
console.log(`> Ready on http://localhost:${port}`);
});
});
4- Configuration in IIS
We check that our modules are installed by clicking on our IIS server,
then clicking on Modules to see iisnode.
After that, we select Feature Delegation
and verify that Handler Mappings is set to Read/Write.
Then we create our website in IIS, referencing the folder we created; click the website and open Handler Mappings.
Once inside, we click Add Module Mapping: in Request Path we put the name of the js file, in this case "server.js"; in Module we select iisnode; and in Name we put iisnode.
Click OK; this will create a configuration file in our folder called web.config. Open it and put this in it:
<configuration>
<system.webServer>
<rewrite>
<rules>
<!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
<rule name="StaticContent">
<action type="Rewrite" url="public{REQUEST_URI}"/>
</rule>
<!-- All other URLs are mapped to the node.js site entry point -->
<rule name="DynamicContent">
<conditions>
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
</conditions>
<action type="Rewrite" url="server.js"/>
</rule>
</rules>
</rewrite>
<!-- 'bin' directory has no special meaning in node.js and apps can be placed in it -->
<security>
<requestFiltering>
<hiddenSegments>
<add segment="node_modules"/>
</hiddenSegments>
</requestFiltering>
</security>
<!-- Make sure error responses are left untouched -->
<httpErrors existingResponse="PassThrough" />
<iisnode node_env="production"/>
<!--
You can control how Node is hosted within IIS using the following options:
* watchedFiles: semi-colon separated list of files that will be watched for changes to restart the server
* node_env: will be propagated to node as NODE_ENV environment variable
* debuggingEnabled - controls whether the built-in debugger is enabled
See https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config for a full list of options
-->
<!--<iisnode watchedFiles="web.config;*.js"/>-->
</system.webServer>
</configuration>
Finally, stop and restart the site in IIS, then browse to it.
Not necessarily. You can override the certificate check with a change to sys_properties; it helps with troubleshooting connections when certs are an issue:
com.glide.communications.httpclient.verify_hostname = false
You need an __eq__ in Point. Otherwise, Python compares the two objects by identity (their addresses) and always comes up False.
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __eq__(self, other):
if not isinstance(other, Point):
return False
return self.x == other.x and self.y == other.y
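With __eq__ defined this way, value comparison behaves as expected:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return False
        return self.x == other.x and self.y == other.y

# Without __eq__ the first line would print False,
# because Python would fall back to identity comparison.
print(Point(1, 2) == Point(1, 2))  # True
print(Point(1, 2) == Point(3, 4))  # False
print(Point(1, 2) == "not a point")  # False
```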
$sql = "SELECT SUBSTRING(SUBSTRING_INDEX(`COLUMN_TYPE`,'\')',1),6) as set_str FROM `information_schema`.`COLUMNS` WHERE `TABLE_SCHEMA` = 'db' AND `TABLE_NAME` = 'tbl' AND `COLUMN_NAME` = 'col'";
$set_h = mysqli_query($c,$sql);
if (!$set_h){
echo "ERROR: ".$sql;
return;
}
if (mysqli_num_rows($set_h)!=0){
$set_str = mysqli_fetch_assoc($set_h)['set_str']; // mysql_result() is from the removed mysql extension; use mysqli here
echo $set_str.'<br>';
$type_a = explode("','",$set_str);
print_r($type_a);
}
return;
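The string surgery above (strip the leading set(' and the trailing '), then split on ',') can be sketched language-agnostically; here is a Python version with a hypothetical helper name:

```python
def parse_set_values(column_type):
    """Extract the allowed values from a MySQL SET/ENUM COLUMN_TYPE
    string such as "set('a','b','c')"."""
    # slice out everything between the opening (' and the closing ')
    inner = column_type[column_type.index("('") + 2 : column_type.rindex("')")]
    # the values are separated by ','
    return inner.split("','")

print(parse_set_values("set('red','green','blue')"))  # ['red', 'green', 'blue']
```

Note this simple split assumes the values themselves contain no embedded ',' sequences.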
I'm facing the same issue; can someone please help?
I'm getting the error below with WebDriverIO v9.19.1.
By default the credentials are set from my company's user profile. How do I fix this?
Please remove "incognito" from the "goog:chromeOptions" args, as it is not supported when running Chrome with WebDriver.
WebDriver sessions are always incognito mode and do not persist across browser sessions.
Figured it out eventually. The company uses a complicated system of profiles, and the profile used for tests contains a (well-hidden) spring.main.web-application-type: none
, which somehow did not prevent MockMvc from autowiring before. But now this property is parsed correctly, and the application is not considered a web application, so there is no WebApplicationContext, and hence no MockMvc. The solution is to set the type to servlet.
This is what worked for me. Put the entry.Text inside the Dispatcher.
https://github.com/dotnet/maui/issues/25728
Dispatcher.DispatchAsync(() =>
{
if (sender is Entry entry)
{
entry.Text = texto.Insert(texto.Length - 2, ",");
entry.CursorPosition = entry.Text.Length;
}
});
I'm also facing the same issue in a Next.js project.
❌ Problem
When trying to use node-llama-cpp
with the following code:
import { getLlama, LlamaChatSession } from "node-llama-cpp";
const llama = await getLlama();
I got this error:
⨯ Error: require() of ES Module ...node_modules\node-llama-cpp\dist\index.js from ...\.next\server\app\api\aa\route.js not supported. Instead change the require of index.js to a dynamic import() which is available in all CommonJS modules.
✅ Solution
Since node-llama-cpp
is an ES module, it cannot be loaded using require()
in a CommonJS context. If you're working in a Next.js API route (which uses CommonJS under the hood), you need to use a dynamic import workaround:
const nlc: typeof import("node-llama-cpp") = await Function(
'return import("node-llama-cpp")',
)();
const { getLlama, LlamaChatSession } = nlc
This approach uses the Function
constructor to dynamically import the ES module, bypassing the CommonJS limitation.
📚 Reference
You can find more details in the official node-llama-cpp troubleshooting guide.
https://github.com/thierryH91200/WelcomeTo and also
https://github.com/thierryH91200/PegaseUIData
I wrote this application Xcode-style.
Look at the ProjectCreationManager class;
maybe it can help you.
I don't know which buildozer version you have installed, but distutils is still used in buildozer/targets/android.py https://github.com/kivy/buildozer/blob/7178c9e980419beb9150694f6239b2e38ade80f5/buildozer/targets/android.py#L40C1-L40C43
The link I share is from the latest version of that package, so I think you should either ignore the warning or look for an alternative to that package.
I hope it helps
I have seen this when working with java classes. If I try to set the breakpoint on the member variable declaration then it adds a watchpoint instead of a breakpoint. It doesn't seem to matter if I am both declaring and initializing the member variable or just declaring it ... it will only add watchpoints at that location.
If inside one of my member methods I try to add a breakpoint to a local variable declaration it will not add anything at all. Inside the member method if I try to add a breakpoint on just about any other line of code, such as assigning a value to the local or member variable, then it does set a breakpoint.
After trying using even WinDbg, I got it working by recreating my Release profile and building it as Release. I don't know why Debug was not working, if it was a debug define
that Visual Studio set, or one of the many different flags used between both profiles. An exact copy from the Debug profile didn't work, so I had to make an actual Release, including optimization flags, etc. Both the Dll and the Console App started working after compiling as Release.
This is why you should never transfer binary numbers between different systems. You can also run into codepage issues. Always convert the numbers to character format.
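A quick sketch of one reason raw binary is fragile: the same integer serializes to different byte orders on different systems, while its character form is unambiguous.

```python
import struct

n = 1
big = struct.pack('>i', n)     # big-endian 32-bit int
little = struct.pack('<i', n)  # little-endian 32-bit int

print(big.hex())     # 00000001
print(little.hex())  # 01000000

# The character (text) form is the same everywhere:
print(str(n))  # 1
```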
In your example, it will not stop executing after the first reply, meaning your code will continue through the rest of the function, as @dave-newton mentioned in their comment. The docs show that return is not strictly necessary there, but I think the correct way to structure it is like this:
if (!transaction) {
return rep.code(204).send({ message: 'No previously used bank found' })
}
rep.code(200).send(transaction) // only reached when a transaction exists
I believe this is specific to using an emulator API level less than 36 with more recent versions of firebase.
I think "NetworkCapability 37 out of range" is referencing Android's NetworkCapabilities.NET_CAPABILITY_NOT_BANDWIDTH_CONSTRAINED (constant: 37) which was added in API 36.
I got the same error when using an API 35 emulator and it went away when using API 36.
Thank you @Glad1atoR and @j.f. for pointing out it was an emulator specific error.
That means Airflow was not able to pull the data; you probably haven't placed the XCom data in the right place. Also note that you will not find anything in the XCom sidecar logs: the purpose of the XCom sidecar is to keep the pod alive while Airflow pulls the XCom values. Check the watcher pod logs instead; they will probably tell you why.
In the first case, the pattern matches twice: for "30 x 4" and for "0 x 9", leaving "00" unmatched.
In the second case it matches only once, for "3 x 4"; after that, for " x 90", there is no longer a match.
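The non-overlapping behaviour described above can be illustrated in Python (Lua's matcher works the same way): once the "4" in "3 x 4 x 90" has been consumed by the first match, it cannot also start the second one.

```python
import re

# digits, space, 'x', space, digits
print(re.findall(r'\d+ x \d+', '3 x 4 x 90'))  # ['3 x 4']
# '4' was consumed by the first match, so ' x 90' alone cannot match
```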
And in general, isn't it easier to replace spaces with the necessary characters?
local s = "3 x 4 x 90"
s = s:gsub('%s+', '\u{2009}')
print (s)
Result:
3u{2009}xu{2009}4u{2009}xu{2009}90
This is an ugly one-liner that I use to calculate the maximum number of parallel jobs I can build flash-attn on a given nvidia/cuda machine. I've had to build on machines that were RAM-constrained and others that were CPU-constrained (ranging from Orin AGX 32GB to 128vCPU AMD with 1.5TB RAM and 8xH100).
Each job maxes out around 15GB of RAM, and each job will also max out around 4 threads. The build will likely crash if we go over RAM, but will just be slower if we go over threads. So I calculate the lesser of the two for max parallelization in the build.
YMMV, but this is the closest I've come to making this systematic as opposed to experiential.
mem_jobs=$(expr $(free --giga | grep Mem: | awk '{print $2}') / 15); proc_jobs=$(expr $(nproc) / 4); export MAX_JOBS=$(($proc_jobs < $mem_jobs ? $proc_jobs : $mem_jobs))
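For readability, the same "lesser of RAM-limited and CPU-limited" calculation can be sketched in Python (the 15 GB-per-job and 4-threads-per-job figures come from the observations above; max_build_jobs is just an illustrative name):

```python
def max_build_jobs(total_ram_gb, cpu_count, gb_per_job=15, threads_per_job=4):
    """Lesser of RAM-limited and CPU-limited parallelism, at least 1 job."""
    mem_jobs = total_ram_gb // gb_per_job
    proc_jobs = cpu_count // threads_per_job
    return max(1, min(mem_jobs, proc_jobs))

print(max_build_jobs(128, 32))  # 8  -> min(128//15 = 8, 32//4 = 8)
print(max_build_jobs(32, 12))   # 2  -> min(2, 3): RAM is the bottleneck
```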
Thanks @John Polo for spotting an issue. Eventually, I realized there were two other important problems in the loop I was creating:
The subset() call was not triggering any warning so it could not work properly anyway. In order to trigger it, I had to call values() but to avoid printing the results I embedded it into invisible(). I could have called other functions too to trigger the warning, but I opted for values() believing it to be pretty fast.
You cannot assign a matrix to a SpatRaster layer. So what I did was just to subset the original SpatRaster to that specific layer and fill it with NAs.
Here it is the solution to my problem in the form of a function. Hopefully others might find it helpful, even though I am sure some spatial analysis experts could come up with a way more efficient version.
# Function to repair a possibly corrupted SpatRaster
fixspatrast <- function(exporig) {
# Create a new SpatRaster with same structure, all NA
expfix <- rast(exporig, vals = NA)
for (lyr in 1:nlyr(expfix)) {
tryCatch({
lyrdata <- subset(exporig, lyr)
invisible(values(lyrdata)) # force read (triggers warning/error if unreadable)
expfix[[lyr]] <- lyrdata
}, warning = function(w) {
message("Warning in layer ", lyr, ": ", w$message)
nalyr <- subset(exporig, lyr)
nalyr[] <- NA
names(nalyr) <- names(exporig)[lyr] # keep name
expfix[[lyr]] <<- nalyr # <<- so the assignment reaches expfix in the enclosing function
})
}
return(expfix)
}
One way to avoid this error is the following:
crs_28992 <- CRS(SRS_string = "EPSG:28992")
slot(d, "proj4string") <- crs_28992
* You need the package 'sp'
** Tested on Windows 11
When you connect with the GUI, you select "Keep the VM alive for profiling" in the session startup dialog:
Just in case it helps anyone in this thread, I created a package based on the answers from @AntonioHerraizS and @Ben to easily print any request or response with rich support.
If anyone is interested, you can find it here: github.com/YisusChrist/requests-pprint. The project currently supports requests and aiohttp libraries (with their respective cache-versions as well).
I hope you will find it helpful.
There is an issue related to this already raised in the repo: https://github.com/Azure/azure-cli/issues/26910
If you are looking for an alternative, you can look at this documentation: https://docs.azure.cn/en-us/event-grid/enable-identity-system-topics
which does it through the interface.
This solved the issue for me:
Go to your extensions
Search for GitHub Copilot
Click disable
Restart the extension
You can help out type inference by tweaking makeInvalidatorRecord
to be type-parameterized over the invalidators' argument types. This is easier to explain in code than prose.
Instead of this:
type invalidatorObj<T extends Record<string, unknown>> = {
[K in keyof T]: T[K] extends invalidator<infer U> ? invalidator<U> : never;
};
You could use a type like this:
type invalidatorObjFromArgs<T extends Record<string, unknown[]>> = {
[K in keyof T]: invalidator<T[K]>
};
And then have the type parameter of makeInvalidatorObj
be a Record
whose property values are argument tuples rather than entire invalidators.
Here's a complete example:
type invalidator<TArgs extends unknown[]> = {
fn: (...args: TArgs) => any;
genSetKey: (...args: TArgs) => string;
};
type invalidatorObjFromArgs<T extends Record<string, unknown[]>> = {
[K in keyof T]: invalidator<T[K]>
};
const makeInvalidatorObj = <T extends Record<string, unknown[]>>(
invalidatorObj: invalidatorObjFromArgs<T>,
) => invalidatorObj;
const inferredExample = makeInvalidatorObj({
updateName: {
fn: (name: string) => {},
genSetKey: (name) => `name:${name}`,
// ^? - (parameter) name: string
},
updateAge: {
fn: (age: number) => {},
genSetKey: (age) => `age:${age}`,
// ^? - (parameter) age: number
},
});
I think it has to do with the gcc version: CUDA 12.x is supported on gcc-12, while Debian 13 provides gcc-14 as the default.
An alternative is to use the robocopy command.
robocopy ".\Unity Editors" "C:\Program Files\Unity Editors" /e
This will create the Unity Editors folder if it does not exist, but will not alter it if it already exists.
The only downside is that you need an existing directory structure to copy, as robocopy will not create directories from nothing. The /e flag copies subdirectories, including empty ones, so with an empty source folder no files are copied.
i.e. I want to create directory d at the end of C:\a\b\c, so I create the d folder locally, but I do not want to disturb any existing folders or files, nor cause an error if it already exists.
.\md d
.\robocopy ".\d" "C:\a\b\c" /e
.\rm d
What I like about robocopy is it plays nice with batching things on SCCM and such. Its also quite powerful.
Maybe it will be useful for someone, I used postgresql with jdbc, and I had error messages not in English. The VM options -Duser.language=en -Duser.country=US
helped me
I use the following one and it has worked well for me on Android. It is a much friendlier way to program it, and configuring the app with the printer is also much friendlier: https://pub.dev/packages/bx_btprinter
I'm having the same problem, but with the Next.js framework. It is not copying a crucial directory called _next containing my exported embedded website.
I don't know why, but I tried many options in build.gradle inside the android block, like aaptOptions and androidResource, both inside defaultConfig and inside buildTypes.release; none of them worked.
I found a solution for this reading the AAPT source code (here) it defines the ignore assets pattern like this:
const char * const gDefaultIgnoreAssets =
"!.svn:!.git:!.ds_store:!*.scc:.*:<dir>_*:!CVS:!thumbs.db:!picasa.ini:!*~";
Also in this source code, it reads the env var ANDROID_AAPT_IGNORE, and using this env var works!
So, you can set this ENV before, or export it in your .bashrc or .zshrc, or use it inline when calling the gradle assembleRelease like this:
ANDROID_AAPT_IGNORE='!.svn:!.git:!.ds_store:!*.scc:.*:!CVS:!thumbs.db:!picasa.ini:!*~' ./gradlew assembleRelease
The `layout` parameter of `plt.figure` can be used for this purpose.
For instance, the following example keeps the layout tight even after the window is resized.
import matplotlib.pyplot as plt
x = range(10)
y1 = [i ** 2 for i in x]
y2 = [1.0 / (i + 1) for i in x]
fig = plt.figure(layout="tight")
ax1 = plt.subplot(1, 2, 1)
ax1.plot(x, y1)
ax2 = plt.subplot(1, 2, 2)
ax2.plot(x, y2)
plt.show()