Starting with MAMP Pro 7.1, mysqldump moved to this location:
/Applications/MAMP/Library/bin/mysql80/bin/mysqldump
Call it explicitly:
/Applications/MAMP/Library/bin/mysql80/bin/mysqldump --host=localhost -uroot -proot db_name > /path/to/db_name_backup.sql
Same here man...did it get fixed? If so, how did you fix it? Thanks for the help
const {email} = req.body // returns "[email protected]"
const user = await User.findOne({email})
// Try writing it like this.
const userEmail = req.body.email // returns "[email protected]"
const user = await User.findOne({email: userEmail})
// assuming there is an email column named email in the db
QSocketNotifier: Can only be used with threads started with QThread Segmentation fault (core dumped)
I get this using Actiona on Ubuntu 22.04, where the application is installed as a Snap.
I would call it a non-deterministic issue. I hate code execution mysteries. Probably some data driven edge case that your dependencies are hitting. Hard to repro, hard to find.
The above works perfectly when searching notes on a worksheet. If you are searching a range and want to know whether there is no comment in a particular cell, try using NoteText instead:
Dim a As String
a = Worksheets("Sheet1").Range("A1").NoteText
I cannot believe this answer is not on Stack Overflow. After months of trying and giving up, I finally saw someone on GitHub say to use sudo. I used sudo and it finally worked. I can't believe the fix for such a headache of a problem, one I couldn't find a solution to anywhere, was as simple as adding sudo.
sudo npx expo start --tunnel
I had this exact issue on my Linux PC with zsh. Adding the following into ~/.zshrc resolved the issue:
export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"
if ! pgrep -u "$USER" ssh-agent > /dev/null; then
eval "$(ssh-agent -s)"
fi
ssh-add -q ~/.ssh/id_personal
The root cause is that your lines $a and $b generate uncaptured output in the top-level script context and so result in an implicit call to Out-Default to display the output on the console. This passes the whole output from both lines into a single call to Format-Table, which has a quirk: it waits 300ms for more data to arrive before it decides which columns to display. It looks like in that 300ms only the data from $a is received, so it locks the columns down to Name and Group. When the output from $b is received, it doesn't automatically add the GroupMembership column.
@Santiago Squarzon's answer works around this by aligning the property names in $a and $b so the columns determined by Format-Table are consistent across all of the output.
Another option is to explicitly pipe the individual variables into Format-Table like this:
$a | format-table
...
$b | format-table
which will render two separate tables, each with its own columns calculated from the input to that call to Format-Table, and will result in this on the console:
Name Group
---- -----
D2\\[email protected] {ADMINS, WebService}
D2\\[email protected] WebService
D2\\[email protected] WebService
D2\\[email protected] ADMINS
D2\\[email protected] WebService
Name GroupMembership
---- ---------------
D2\\[email protected] {ADMINS, WebService}
D2\\[email protected] WebService
D2\\[email protected] WebService
D2\\[email protected] WebService
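If you would rather keep a single table with all three columns, a quick sketch of a third option: name the properties explicitly so Format-Table doesn't have to guess.
@($a) + @($b) | Format-Table Name, Group, GroupMembership
Objects that lack one of the named properties simply show an empty cell in that column.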
See these links for more gory technical details:
Same issue here. Disabling works; thoughts?
# Import and Disable Default Repo
data "azuredevops_git_repository" "lab_001_default" {
project_id = azuredevops_project.lab_001.id
name = azuredevops_project.lab_001.name
}
resource "azuredevops_git_repository" "lab_001_default" {
project_id = azuredevops_project.lab_001.id
name = azuredevops_project.lab_001.name
disabled = true
initialization {
# I assume the default is Uninitialized, but this is ignore_changes so I
# don't think we should care.
init_type = "Uninitialized"
}
lifecycle {
ignore_changes = [
# Ignore changes to initialization to support importing existing repositories
# Given that a repo now exists, either imported into terraform state or created by terraform,
# we don't care for the configuration of initialization against the existing resource
initialization,
]
}
}
import {
id = join("/", [
data.azuredevops_git_repository.lab_001_default.project_id,
data.azuredevops_git_repository.lab_001_default.id
])
to = azuredevops_git_repository.lab_001_default
}
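Note that the import block above is the config-driven import added in Terraform 1.5, so a normal plan/apply performs the import and no separate terraform import CLI call is needed:
terraform plan    # shows the repository being imported into state
terraform apply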
For anyone on a Mac, you will need to do the following:
sudo npm cache clean -f
npm update
npm update -g @vue/cli
sudo vue create app-name
Apparently Vue only likes sudo commands on macOS and Linux.
It does not appear to be a syntax or linting warning, nor does it resemble any typical highlight associated with code cells or markdown.
For me, it does: the sections written in color show there is a code cell with a warning or an error (I also use Pylance).
For example, here I don't have an error or warning:
With a warning, I get orange text and an orange circle:
With an error, I get red text and a red circle:
Did you make any headway on this? I'm curious as well.
As of December 31, 2024, if you're following older Spring tutorials, you may run into this issue:
In the past, when you selected the "gateway" dependency in Spring Initializr, the artifact included was spring-cloud-starter-gateway-mvc. This worked for some older tutorials. However, this will not work now if the tutorial expects the reactive gateway.
If things start to fail and you're wondering why—this is likely the issue!
The correct artifact is spring-cloud-starter-gateway, which comes when you select "Reactive Gateway" in Spring Initializr.
Because of the noise in the experimental data, I thought it would be easier to work with np.interp() to interpolate the data:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 32, 100)
interCurve = np.interp(x, bias_voltage, dark_current)
derivB = np.gradient(interCurve[:-1], x[:-1])
plt.plot(x, interCurve, label='interpolated curve')
plt.scatter(bias_voltage, dark_current, marker='x', color='g', s=6, label='experimental points')
plt.plot(x[:-1], derivB, label='derivative of interpolated curve')
plt.legend()
plt.show()
peerdb works with non-hosted clickhouse instances. In fact, our CI just runs stock clickhouse:
& then e2e peer setup: https://github.com/PeerDB-io/peerdb/blob/60e80b822ec284224ccb87ee008a33201d42c85d/flow/e2e/clickhouse/clickhouse.go#L67
peerdb docker-compose files include minio to serve as s3 staging, if you're running peerdb outside of that environment you'll need to configure an s3 bucket for clickhouse
It can be awkward to connect to localhost if you're running peerdb inside docker and postgres outside docker. We'd have to know more about your setup to help further.
WP and Woo w/ HPOS are recent versions. Running PHP 7.4
(1) Do you have an answer for why WP/Woo's maybe_serialize doesn't produce data starting with the a: ... serialized form as described above? Instead, it's 2 sets of serialized data, not one.
I used maybe_serialize([array here]) and the actual data in the database start with
s:214"
and ends with
";
The actual serialized data are in between. (Note: The "214" depends on the size of the array keys and values).
If I use PHP's serialize before sending it to the database, the serialized data are stored as you described, without the s:214" prefix and "; suffix.
Why is that?
In an external program needing the data, if I send the serialized data through PHP's unserialize, it won't unserialize (try it at unserialize.com); it has to be run a second time, taking up unnecessary resources and requiring the knowledge that it's double-serialized. Future programmers may not be aware of that.
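For what it's worth, the double wrapping is reproducible with a few lines of plain PHP (a minimal sketch; WordPress's maybe_serialize() re-serializes a string that already looks serialized, which is what produces the s:NNN:"..." wrapper you are seeing):
$data = array('a' => 1, 'b' => 2);
$once = serialize($data);   // a:2:{s:1:"a";i:1;s:1:"b";i:2;}
$twice = serialize($once);  // s:NN:"a:2:{...}"; <- the extra s:NN:" wrapper
var_dump(unserialize(unserialize($twice)) == $data); // bool(true): must unserialize twice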
(2) In the above example, the serialized data are $order->update_meta_data('item_shipping_data', $data_serialized);
QUESTION Do I really need to serialize or maybe_serialize the data before running $order->update_meta_data()?
QUESTION for reading the data - does WP/Woo automatically unserialize it using $order->get_meta('meta_key_here');
(3) One step further, using PHP's serialize: in WP/Woo, how would I apply $mysqli->real_escape_string() to cleanse the serialized data for the database, to avoid the double serializing? This question is for other places where we may need to store serialized data besides $order->update_meta_data().
Thank you for your thoughtful answers!
The solution is perfect, THANK YOU! Tested with TYPO3 13
Check that your build variant is set to debug and not release. In Android Studio go to the Build menu > Select Build Variant > in the Build Variants window set the 'Active Build Variant' for module ':app' to Debug.
If you have it set to Release it is likely not working because your build.gradle file has the 'debuggable' attribute set to false.
As already mentioned in this topic, a process that runs with PID 1 in its own pid namespace inherits a specific behaviour regarding SIGINT and SIGTERM: by default, they are ignored. This is precisely what happens when running a docker container, but it is not limited to that case.
For example, run this command in a shell as root:
# unshare --pid --fork --mount-proc sleep infinity
This runs a sleep infinity command in its own pid namespace. You can verify it by running the lsns command in another shell.
# lsns
NS TYPE NPROCS PID USER COMMAND
4026532363 pid 1 292 root sleep infinity
If you try to send a SIGINT to this process (with Ctrl+C in the first shell, or with the kill -s SIGINT <PID> command in the second shell), it will have no effect. If you want to get rid of this process, you have to hard-kill it with the kill -s SIGKILL <PID> command in the second shell.
You can check that this process was running with PID 1 in its pid namespace by running the ps command the same way.
# unshare --pid --fork --mount-proc ps
PID TTY TIME CMD
1 pts/0 00:00:00 ps
With docker, you can observe essentially the same thing.
# docker run -d --rm --name ubuntu ubuntu sleep infinity
d13fc1da3609407332c511f68d5b0513b31fa55df2e9b545044f53bfd0b2dc4b
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13fc1da3609 docker.io/library/ubuntu:latest sleep infinity 2 seconds ago Up 3 seconds ubuntu
# lsns
4026532384 pid 1 1062 root sleep infinity
Trying to kill the sleep infinity process with SIGHUP or SIGTERM will result in the same behaviour as previously explained, because this process is running with PID 1 in its own pid namespace (only a hard SIGKILL from the host gets rid of it).
# docker exec ubuntu ps x
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 sleep infinity
2 ? R 0:00 ps x
So what does docker stop do? Without any fancy option, it sends a SIGTERM to the process running with PID 1 in the container pid namespace. If the process is still running after a 10 second timeout, it sends a SIGKILL.
This is why a container that runs a process that does not handle signals properly is slow to stop. The first signal is ignored, the second is not.
Documentation here: Docker stop docs
You can verify it with these commands:
# TIMEFORMAT="==> Execution time = %Rs"
# time docker stop ubuntu
ubuntu
==> Execution time = 10.518s
The simplest fix consists in using the --init option when creating the container. This adds a binary (developed in the tini GitHub project) to the newly created container, runs it with PID 1 in the container pid namespace, and has it run the container's command as a fork.
Running the same commands as before shows this:
# docker run --init -d --rm --name ubuntu ubuntu sleep infinity
27fc4026c264f48c8ee148796f77e7705411691845e4267467b5bc9f2aba609a
# docker exec ubuntu ps x
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /sbin/docker-init -- sleep infinity
7 ? S 0:00 sleep infinity
8 ? Rs 0:00 ps x
A simple docker stop is now very quick, showing that the SIGTERM signal is handled by the docker-init process, which kills its forks and gracefully stops.
# time docker stop ubuntu
ubuntu
==> Execution time = 0.501s
What if you cannot use the docker --init option? You want to make sure that your init process declares its own signal handlers. If you're planning to run a simple sleep infinity command in your container, you can wrap it in a bash script that runs the trap command first.
BUT when you run the exec sleep command from bash, the sleep binary runs in a blocking way, meaning it waits to finish before signals are interpreted again. As a consequence, the trap command becomes ineffective.
A workaround could consist in using a non-blocking (signal-responsive) waiting command, like read, reading from a read/write-opened unix pipe created with mkfifo. Note that you can symlink a file descriptor to this unix pipe file (and even delete the file!) to preserve a non-blocking read without polluting your container with unnecessary pipe files.
Here is an example:
#!/bin/bash
# Exit gracefully when SIGINT or SIGTERM arrives
trap "exit 0" SIGINT SIGTERM
# Create a named pipe, open fd 3 on it read/write, then delete the file itself
tmpdir="$(mktemp -d)"
mkfifo "$tmpdir/pipe"
exec 3<>"$tmpdir/pipe"
rm -r "$tmpdir"
# Block forever on a read that stays responsive to signals
read -u3
Put this content in a scripts/run.sh file on your docker host, and do not forget to chmod +x it.
And now, let's run the whole bunch of commands previously mentioned, using this script as the "init" program with PID 1 in the container.
# docker run -d --rm -v "$PWD/scripts:/scripts" --name ubuntu ubuntu /scripts/run.sh
8d947443ae6eaf0093378ffb4480c3a67ea221ff240bab251d9f92c9216385f6
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13fc1da3609 docker.io/library/ubuntu:latest sleep infinity 2 seconds ago Up 3 seconds ubuntu
# lsns
4026532384 pid 1 2551 root /bin/bash /scripts/run.sh
# TIMEFORMAT="==> Execution time = %Rs"
# time docker stop ubuntu
ubuntu
==> Execution time = 0.441s
Here's a quick docker stop, without the --init option, mimicking the sleep command with bash, with the necessary signal handling to stop without a hard kill. :-)
Short answer: not a good idea. It is the responsibility of the init process (PID 1 in its pid namespace) on a Linux system to reap zombie processes forked from it, and of course the minimalistic bash script above does not do this. More information about zombie processes at: this link
You can spawn a 100-second zombie process by adding the (sleep 1 & exec sleep 101) & command before the read command in the previous bash script, and observe it with docker exec ubuntu ps fx.
Your init process in your container must handle signals properly and reap zombie processes. The --init option on the docker command line ensures that.
I was losing my sanity until I thought of changing the object from a list to a tuple:
class AuthorAdmin(admin.ModelAdmin):
inlines = ( BookInline, )
I'm using Python 3.12 and Django 5.
Have you found the problem with that?
Unfortunately, most commercial server companies are not going to change ini settings for individual site preferences, so changing ini settings is pretty much a superficial answer. Sure, it's fine for your own test servers, but try going to a live commercial server and asking the admins to change any of the ini files.
We are going to investigate PHP's ability to read the ini variables and enforce those limits ourselves, with obvious error warnings, prior to processing.
Thanks for the answer though.
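For reference, a minimal sketch of that runtime-check idea: read the relevant ini limits and fail early with a clear message (upload_max_filesize and post_max_size are the usual culprits; the shorthand-to-bytes helper is our own, not a PHP built-in):
function ini_bytes($v) {
    $n = (int) $v;
    switch (strtoupper(substr(trim($v), -1))) {
        case 'G': $n *= 1024; // fall through
        case 'M': $n *= 1024;
        case 'K': $n *= 1024;
    }
    return $n;
}

$limit = min(ini_bytes(ini_get('upload_max_filesize')), ini_bytes(ini_get('post_max_size')));
if (!empty($_SERVER['CONTENT_LENGTH']) && (int) $_SERVER['CONTENT_LENGTH'] > $limit) {
    http_response_code(413);
    exit('Upload exceeds the server limit of ' . $limit . ' bytes.');
}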
The issue has been identified; we are now working around the problem, as the software, once completed, is going to be in the public domain.
When a Session is created a connection resource is requested from the Engine. The connection remains open until the transaction completes, which can happen when a rollback or commit is called. In the case of autocommit, the commit occurs immediately after a statement is processed. At this point the transaction ends and the underlying connection resource is returned back to the connection pool.
Based on my understanding of how SQLAlchemy manages its connection pools, it seems safe to not explicitly close sessions. GC would clean up any Session objects that were no longer referenced. But there's no advantage to keeping sessions alive, that I'm aware of, and best practice is normally to close resources when they're no longer needed.
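That said, a minimal sketch of the deterministic pattern (SQLAlchemy 1.4+ style; the in-memory engine is just for illustration):
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///:memory:")  # the connection pool lives on the engine

with Session(engine) as session:       # a connection is checked out lazily on first use
    session.execute(text("SELECT 1"))
    session.commit()                   # transaction ends; connection returns to the pool
# leaving the block closes the session even if an exception was raised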
Adding to the suggestion from @Kellen, I had to ask a new question to figure out exactly how to access this state as it is not exposed via the api. The answer is here https://stackoverflow.com/a/79315889/9625 (Thanks to @MrOnlineCoder)
For completeness I am posting the code snippet in case anyone finds this question via Google as I did.
<p-datatable :value="customers" :filters="customerFilters" ref="datatable">
...
</p-datatable>
...
const datatable = useTemplateRef('datatable');
...
let filteredCustomers = datatable.value.processedData
let str = "Customers in filter: "
str += filteredCustomers.map(customer => customer.fullname).join();
alert(str)
You can run the cells in a markdown section from the "outline" (which you can open with the command Jupyter: Show Table Of Contents (Outline View)):
Adjust your code as follows:
txtFileName = Application.GetSaveAsFilename(ThisWorkbook.FullName, "Excel Macro-Enabled Workbook (*.xlsm), *.xlsm,PDF File (*.pdf),*.pdf", , "Save As XLSM or PDF file")
You can easily just use the tailwindcss selector.
<NavLink className="[&.active]:bg-slate-300">Home</NavLink>
<NavLink className="[&.active]:bg-slate-300">About</NavLink>
I had this issue due to symlinks: https://github.com/typescript-eslint/typescript-eslint/issues/2987
Opening the project directory via the true path rather than the symlink solved the problem for me.
# Compare these two outputs, if they are different than you are in a symlinked directory
pwd -P # Shows the physical path (real path after resolving symlinks)
pwd -L # Shows the logical path (path with symlinks)
# Navigate to the non symlinked directory
cd $(pwd -P)
None of the above works for me; is there any other solution that could help? I am running CUDA 12.1 on A100s, torch=2.2.2+cu12.1. Below is the command I run and the error I get.
python -c "import torch; print(torch.cuda.get_device_properties(0))"
the error:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/__init__.py", line 28, in <module>
    from ._utils_internal import get_file_path, prepare_multiprocessing_environment,
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/_utils_internal.py", line 4, in <module>
    import tempfile
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/tempfile.py", line 45, in <module>
    from random import Random as _Random
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/random.py", line 4, in <module>
    from .. import Tensor
ImportError: attempted relative import with no known parent package

(py39) [pgouripe@sg048:~/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda]$ cd
(py39) [pgouripe@sg048:~]$ python -c "import torch; print(torch.cuda.get_device_properties(0))"
Traceback (most recent call last):
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 315, in _lazy_init
    queued_call()
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 183, in _check_capability
    capability = get_device_capability(d)
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 439, in get_device_capability
    prop = get_device_properties(device)
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 457, in get_device_properties
    return _get_device_properties(device)  # type: ignore[name-defined]
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 453, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 321, in _lazy_init
    raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=

CUDA call was originally invoked at:

  File "<string>", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/__init__.py", line 1427, in <module>
    _C._initExtension(manager_path())
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 247, in <module>
    _lazy_call(_check_capability)
  File "/home/pgouripe/.conda/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 244, in _lazy_call
    _queued_calls.append((callable, traceback.format_stack()))
Any input will be helpful! Thanks
The link in the accepted answer doesn't bring you to the expected location in the docs anymore, try this:
https://hexdocs.pm/phoenix_live_view/Phoenix.Component.html#sigil_H/2-special-attributes
Try to make sure that the map is not being rendered twice. Add a constant key to your MapView component to make sure there is only one instance of the map:
<MapView
key={"map-instance"}
...
/>
Adding CFLAGS="-O2 -g0" before my pyenv install 3.10 command made it work in my WSL Ubuntu environment:
CFLAGS="-O2 -g0" pyenv install 3.10
As of v21.0 I can concur that adding "business_management" is necessary.
Additionally, the following tool is very useful for detecting what the token has access to, so you aren't left wondering whether there was an OAuth issue:
https://developers.facebook.com/tools/debug/accesstoken/
Igy posted a link in a comment above that used to point to this tool, but it has since moved to the URL above.
| Feature | WinForms (C# or C++/CLI) | MFC (C++) |
| --- | --- | --- |
| Framework | .NET Framework / .NET Core | Windows API (native) |
| Language | Managed C# or C++/CLI | Native C++ |
| Development Speed | Faster (RAD) | Slower (manual coding) |
| Ease of Use | Easy (drag-and-drop UI) | Complex (manual UI code) |
| UI Features | Modern controls and styling | Limited styling options |
| Performance | Good (managed code) | High (native code) |
| Portability | Windows, some cross-platform | Windows only |
| Use Case | Business apps, tools | System-level apps, legacy apps |
Choose WinForms for rapid, modern app development or .NET integration. Use MFC for performance-critical, native Windows applications or legacy projects.
- First, check your User Pool user and see if the attributes you want as claims exist on the User Pool entry. You may have only created a User Pool user with sub and email (no additional attributes).
- Then check the claims on your Cognito-issued ID token; it should contain the attributes for your user. You can check it in your application code after authenticating.
- You can enable detailed metrics in API Gateway to give you more logs, and check them in CloudWatch.
- You can try setting up your authorizer to check claims on the access token (not the ID token) as requests come through. (Authorization == checking access token claims.)
const el = await page.waitForSelector("::-p-xpath(//div[@role='button' and text()='Next'])");
await el.click();
You can do this to select an element with XPath in the latest Puppeteer version.
Reference: https://pptr.dev/guides/page-interactions/#xpath-selectors--p-xpath
Turns out there are parameters called maxX and maxY on the BarChartData widget. As for the label... the font colour was the same as the background colour (facepalm).
I have the same problem. Any update here?
Interesting. IMHO, the over-use of 'object' is inane. That issue is likely related to caching/architecture and is basically an 'error/optimization of no consequence'--a good test is if you change z1, does z2 get changed? Probably not, if it does, that is a bug. For the int vs real, again, a "bug" of no consequence--probably an optimization--how does this break code? Please post that part; make that your favorite. :-) (here come the flames & down votes... bite me)
(1) Modify the linker script 'STM32G030C8Tx_FLASH.ld' and add the EE emulation area
MEMORY
{
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 8K
FLASH (rx) : ORIGIN = 0x8000000, LENGTH = 62K
EMULATED_EEPROM (xrw) : ORIGIN = 0x0800F800, LENGTH = 2K
}
/* Define output sections */
SECTIONS
{
/* The startup code into "FLASH" Rom type memory */
.ourData :
{
. = ALIGN(4);
*(.ourData)
. = ALIGN(4);
} >EMULATED_EEPROM
.
.
.
(2) Change the linker script name (to e.g. STM32G030C8Tx_EE_FLASH.ld) so that CubeMX does not erase it when updating code
(3) Modify linker script path in 'CMakeLists.txt' like this
# Set linker script
set(linker_script_SRC ${PROJ_PATH}/STM32G030C8Tx_EE_FLASH.ld)
set(EXECUTABLE ${CMAKE_PROJECT_NAME})
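To actually put data in that region, you can tag a variable with the section attribute (a minimal sketch; the variable name is just an example, and runtime writes still have to go through the flash unlock/erase/program sequence, since the linker section only reserves the address range):
/* Place a constant block in the emulated-EEPROM flash page */
__attribute__((section(".ourData")))
const uint32_t ee_defaults[4] = {0x11u, 0x22u, 0x33u, 0x44u};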
I've been reading out the memory areas and it seems to work fine.
If there is anything wrong or to improve, please give some feedback.
In my case, I had to stash the changes on my local branch, then merge with the remote branch, then when I applied my stashed changes back, the merge conflict window opened.
Hope this helps!
For me, the problem was my JAVA_HOME environment variable was pointed to Java 21. As soon as I changed my JAVA_HOME to point to Java 11, all compiled/linked fine.
Try to download both 'punkt' and 'punkt_tab'
import nltk
nltk.download('punkt_tab')
nltk.download('punkt')
This code automatically adjusts the position to left/right based on the available space.
//states
const [position, setPosition] = useState<string>('[&_div.absolute]:right-auto [&_div.absolute]:left-0');
// logic
useEffect(() => {
if (menuRef.current) {
const menuRect = menuRef.current.getBoundingClientRect();
const spaceOnLeft = menuRect.left;
const spaceOnRight = window.innerWidth - menuRect.right;
// Set position based on available space
if (spaceOnLeft > spaceOnRight) {
setPosition('[&_div.absolute]:left-auto [&_div.absolute]:right-0');
} else {
setPosition('[&_div.absolute]:right-auto [&_div.absolute]:left-0');
}
}
}, [menu]);
// ui
<NavigationMenu className={position}>...</NavigationMenu>
note: i realise this doesn't answer the specific question, but it may help if your problem is "how do i update my vuetify2 app to vue3"
have a look at https://github.com/vuetifyjs/eslint-plugin-vuetify#readme
you can't use vuetify2 with vue3, but you can upgrade to vuetify3 and use this plugin to reduce the migration headache.
Windows 10. When using IDLE I got this error message. It turns out IDLE does not have write permission in the folder. Solution: run IDLE as administrator or give IDLE write permission. In Windows Security, go to Virus & threat protection, Manage ransomware protection, Allow an app through Controlled folder access, + Add an allowed app.
I needed a 6 digit integer with no zeroes so I did this:
Enum.map(0..5, fn _ -> Enum.random(1..9) end) |> Integer.undigits
I wouldn't use this for anything large, but I consider it acceptable for a few digits and infrequent use.
You need to dereference it with '*', use '{'/'}' vs. '['/']', and then you will be printing the ASCII value of 'x' = 120. Here is the code:
#include <stdio.h>
int main() {
    char name[4] = {'x', '%', 'Q', 0};
    printf("%d\n", *name);
}
This message still comes up occasionally in VS 17.12.3 (some things never get fixed, apparently). Turns out it writes that csuser file badly, and a quick fix is to change the target to another platform and back again. I find going between a real iOS device and an iOS simulator causes the issue; changing it to Windows Machine in between fixes it (no need to actually compile and run, just the intermediate step seems to work).
I ran into this issue as well; for me the solution ended up being that I forgot to include a (ns) form at the top of the file.
Simple: vaf (visual around function), vac (visual around class). Zero plugins.
Thank you so much Dauros, you're the absolute best. I can confirm it works for me too, also using Vite 6.0.6.
I wouldn't do any of this. Use a Power Query (Data tab) to scan the sharepoint folder. Find the workbook based on some logic. Load Query to sheet. Now you have the name of the workbook which can be dynamically used in formulae.
I think that's for code signing, and it means that the certificates used for that purpose must be of the RSA type. No elliptic curve. It seems that some Windows components don't fully support ECC certs; I have read that people using ECC certs for code signing were still getting the infamous SmartScreen warning.
It seems to work with the hack:
n.l
%option noyywrap nounput noinput batch debug
%x l8 retnl
%{
#include "parse.h"
%}
id [a-zA-Z][a-zA-Z_0-9]*
int [0-9]+
blank [ \t\r]
%%
<INITIAL>.|\n {BEGIN l8; yyless(0); }
<retnl>[\n] {return '\n';}
<l8>[\n] { }
<l8,retnl>[ \t] { }
<l8,retnl>[#][^\n]* { }
<l8,retnl>fun { return FUNC; }
<l8,retnl>"{" {return '{';}
<l8,retnl>"}" {return '}';}
<l8,retnl>"(" {return '(';}
<l8,retnl>")" {return ')';}
<l8,retnl>"+" {return '+';}
<l8,retnl>";" {return ';';}
<l8,retnl>{id} {return ID; }
<l8,retnl>{int} {return NUM; }
%%
I define 2 states, l8 and retnl: l8 will swallow '\n' and retnl will return '\n'.
Now in the grammar (n.y) I do:
%{
#define YYDEBUG 1
%}
%code requires {
extern int yy_start;
#define retnl 2
extern enum yytokentype yylex();
extern void yyerror(const char* errmsg);
extern void yyerrorf(const char* format, ...);
}
%expect 0
// %define api.pure
// %locations
%define parse.trace
%verbose
%header
%define parse.error verbose
%token FUNC
%token ID NUM
%left '+'
%%
%start unit;
unit: stmts
stmts:
stmt {}
| stmts stmt {}
stmt:
expr D { yy_start = 1 + 2 * 1 /*state:l8*/; }
;
D: ';'
| '\n'
;
expr: expr '+' expr {}
| primary {}
;
primary:
NUM {}
| ID {}
| FUNC '(' ')' '{' stmts { yy_start = 1 + 2 * 2 /*state:retnl*/; } '}' { }
| FUNC ID '(' ')' '{' stmts { yy_start = 1 + 2 * 2 /*state:retnl*/; } '}' {}
;
%%
void yyerror(const char* errmsg)
{
printf("%s",errmsg);
}
To be able to access yy_start I need to add
sed -i -e 's/static int yy_start/int yy_start/' scan.c
to the Makefile:
all:
bison -rall -o parse.c n.y
flex -o scan.c n.l
sed -i -e 's/static int yy_start/int yy_start/' scan.c
gcc -g -c parse.c -o parse.o
gcc -g -c scan.c -o scan.o
gcc -g -c n.c -o n.o
gcc -g scan.o parse.o n.o -lc -o n.exe
./n.exe test.txt
The two lines yy_start = 1 + 2 * 1 /*state:l8*/ and yy_start = 1 + 2 * 2 /*state:retnl*/ come from what BEGIN(l8) and BEGIN(retnl) would expand to if they were used inside the flex scanner.
Does anybody know a more standard way of achieving this?
For new and old readers of this question I strongly recommend that since Java 8 you use java.time, the modern Java date and time API, for your date work. The classes Date, SimpleDateFormat, GregorianCalendar and Calendar that you were trying to use were troublesome and are fortunately long outdated, so nowadays avoid them.
So it’s about time this question gets answers that demonstrate the use of java.time. There is a good one by Basil Bourque. And here’s my shot.
I know that the moderators and some users don't like reservations and disclaimers like this section and say I should instead ask questions in comments. I'm not sure that works with a 15-year-old question that nevertheless still has readers. So I understand from your question that you want a method that does two things:
- Validate that the string contains a valid month and day of month.
- Convert the string to a Date.
I assume:
- You want to accept 2/29 since we don't know whether it is in a leap year or not. You want to forbid February 30 and April 31.
Using the comment by @Anonymous under the answer by Basil Bourque:
private static final DateTimeFormatter parser
= DateTimeFormatter.ofPattern("M/d", Locale.ROOT);
/** @throws DateTimeParseException If the string is not valid */
public static MonthDay parseMonthDay(String inString) {
return MonthDay.parse(inString, parser);
}
Trying it out:
System.out.println(parseMonthDay("2/29"));
Output:
--02-29
The method rejects, for example, 2/30, 0/30, 1/32 and "1/31 and some nonsense". Funnily, it accepts 001/031.
Date object
As I said, you should not use Date. Unless you indispensably need a Date for a legacy API that you cannot upgrade to java.time just now, that is. But! You basically cannot convert your string to a Date. A Date is a point in time and, despite the name, cannot represent a date, not to mention a day of month without a year. What the troublesome old SimpleDateFormat would do is take the first moment of the day in its default year of 1970 in the default time zone of the JVM. Since 1970 was not a leap year, this implies that 2/29 and 3/1 were both parsed into Sun Mar 01 00:00:00 (your time zone) 1970; that is, you cannot distinguish them.
So unless you have specific requirements that I cannot guess, I recommend that you stay with the MonthDay object returned from my method above.
Forgive the repetition: you were using the troublesome old and error-prone classes, and that typically leads to buggy code.
Your method needs both a return type and a method name, for example:
public Date paresMonthDay(String inString) throws ParseException {
When using SimpleDateFormat.parse() you also need to declare throws ParseException as shown, unless you catch that exception inside the method.
Since your method doesn't use anything from the surrounding object, I recommend you declare it static.
When the method parameter is declared as inString, you need to use that name in the method body (you cannot refer to it as just inStr).
As others have said, you should use the built-in library to parse the string, not parse it by hand. In particular, converting it from M/d to MM/dd format seems just a waste.
As I said, you are parsing 2/29 and 3/1 into the same Date.
There is no connection between your parsed date and your GregorianCalendar cal. The latter holds today's date in the default time zone, so you are effectively checking whether the parsed month is after the current month and issuing your error message if it is.
You are not checking for negative numbers or 0 in the input. In my time zone your method just parsed 0/-1 into Sun Nov 29 00:00:00 CET 1970 and did not issue any error message.
The right bracket ) of your if statement is inside a comment, so the compiler doesn't see it.
In System.out.println, println must be with a lower case p. In the same statement there is one double quote " too many after month.
If your method is to return a Date, you must include a return statement.
Oracle tutorial: Trail: Date Time explaining how to use java.time.
Check out the capabilities of the AntiForgery tokens available in MVC.
You should be able to use the IAntiForgeryAdditionalDataProvider to tie some specific detail(s) in the anti forgery cookie to details in your auth cookie (maybe the Description property?). Then, you can handle the validation failure by clearing all auth data and redirecting to login like you would with any other auth timeout.
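A minimal sketch of that idea against classic ASP.NET MVC's System.Web.Helpers API (the session-stamp detail is an assumption; swap in whatever value you keep alongside your auth data):

using System.Web;
using System.Web.Helpers;

public class AuthStampAntiForgeryProvider : IAntiForgeryAdditionalDataProvider
{
    // Embedded into each generated anti-forgery token
    public string GetAdditionalData(HttpContextBase context)
    {
        return context.Session?["AuthStamp"] as string ?? string.Empty;
    }

    // Called on validation; returning false fails the anti-forgery check
    public bool ValidateAdditionalData(HttpContextBase context, string additionalData)
    {
        return additionalData == (context.Session?["AuthStamp"] as string ?? string.Empty);
    }
}

// Registered once, e.g. in Application_Start:
// AntiForgeryConfig.AdditionalDataProvider = new AuthStampAntiForgeryProvider();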
import all your models in your alembic (env.py) file.
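A minimal sketch of what that looks like (assuming your models share a declarative Base in a hypothetical myapp.models module):
# alembic/env.py
from myapp.models import Base  # importing the module registers every model's table on Base.metadata

target_metadata = Base.metadata  # autogenerate now sees all your models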
Changing the repo to use testing can have some unintended impacts. For me, upgrading the musl package worked:
apk upgrade --available musl
../gradlew -q dependencies
or
gradlew -q dependencies
depending on where you are in your project structure. I use IntelliJ, and it only seems to work in the Command Prompt in the local terminal.
The feature is still missing within the Monitoring API.
@user.update(...) also works in Rails 6 and fully replaces @user.update_attributes(...) from Rails 5 code. You don't need to convert it to a class-level User.update call, which can cause issues later when updating nested models (@user.update works on nested models just like the deprecated update_attributes did in Rails 5).
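For example, a straight swap (user_params stands in for whatever strong-params hash you were already passing):
@user.update(user_params)  # Rails 6 equivalent of @user.update_attributes(user_params)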
For me, the issue was caused by activating multiple conda environments: deactivating the first one switched ipython to the second environment's version, which solved the issue.
FYI, as of 2024-12-30 the web_usb 0.2.0 example does not compile. The install instructions are way too simple and no troubleshooting is provided. Expect to see: "Error: Type 'EventListener' not found", "Target of URI doesn't exist: 'ledger_nano_s_page.dart'", etc.
I would suggest 2 options:
1) Get the values from the table and construct an array of values you can map over later, as in an Angular project. Check the links: getElementByTagName, get table cells.
2) Reconstruct the HTML element:
- Delete the useless data:
<td style="border: 1px solid black; border-collapse: collapse;">----</td>
- Detect the row for the age (number) and use it as input. That would be a huge job and not worth it, assuming you have a normal table rather than one with a single person.
Extra suggestion: ask for a better API.
I was struggling with the same issue: there are plenty of online tools that will "sort by length" a simple single cell/column of data, but nothing that can process multiple rows of data in CSV or TXT (comma-separated or tab-delimited) Excel/Sheets files, so that ALL data is sorted according to the specific "row" of data I want to sort by length.
This online browser-based tool:
https://chathelp.ai/tools/sort-csv-txt-column-by-character-length/
was able to get the job done; it executes in the browser but still worked for my multi-row 40MB+ CSV files, as well as for single-line text files.
Does Java 21 require Drools 10?
You can download it directly from here: https://web.archive.org/web/20220331130319/https://yann.lecun.com/exdb/mnist/
Can you please unblock my WhatsApp? How do I fix this?
You can use the ryuk.container.image property to instruct Testcontainers to use a Ryuk image from your internal docker image registry. See more details on this documentation page.
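For example, in a ~/.testcontainers.properties file (the registry path and tag below are placeholders for your internal mirror):
ryuk.container.image=registry.example.com/mirror/testcontainers/ryuk:0.5.1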
Make sure to put these defines at the very top of the program, as follows:
#define BLYNK_TEMPLATE_ID "xxxxxxx"
#define BLYNK_TEMPLATE_NAME "xxxxxxx"
and before any #include directives:
#include <WiFi.h>
#include <Blynk.h>
#include <BlynkSimpleEsp32.h>
so that the order looks like this:
#define BLYNK_TEMPLATE_ID "xxxxxxx"
#define BLYNK_TEMPLATE_NAME "xxxxxxx"
#include <WiFi.h>
#include <Blynk.h>
#include <BlynkSimpleEsp32.h>
and everything will be fine : ) : )
I've tried a little bit. My findings can be broken down into the following:
It's true that JS is not able to understand (?i); case-insensitive matching can only be applied globally via the i flag. Therefore:
console.log(/(?i:[A-Z]{3})/.test("abcdefghijkl"));
// should return an error
// or false if (?i) is ignored
console.log(/(?i:[A-Z]{4})/.test("abcdefghijkl")); // same here
If {x} is between 0 and 3, it returns true; otherwise it returns false. This is true for Firefox Nightly, Firefox, Brave, Opera, Chrome and Edge.
The reason why I believe this is a bug: after trying Safari, it throws an error:
Invalid regular expression: unrecognized character after (?
Therefore I believe you are right, this is a bug in Chromium and Firefox.
I hope this helps and if anything is unclear or incorrect please let me know! :)
If you are using Python 3, the below should work:
sudo -H pip3 install -U pipenv
:3: error: unclosed character literal
    PUBG (PlayerUnknown's Battlegrounds)
:3: error: not a statement
    PUBG (PlayerUnknown's Battlegrounds)
:3: error: ';' expected
    PUBG (PlayerUnknown's Battlegrounds)
:5: error: <identifier> expected
    import javax.swing.*;
:5: error: illegal start of expression
    import javax.swing.*;
:6: error: illegal start of expression
    import java.awt.*;
:6: error: <identifier> expected
    import java.awt.*;
:6: error: illegal start of expression
    import java.awt.*;
:7: error: illegal start of expression
    import java.awt.event.KeyAdapter;
:7: error: not a statement
    import java.awt.event.KeyAdapter;
:8: error: illegal start of expression
    import java.awt.event.KeyEvent;
:8: error: not a statement
    import java.awt.event.KeyEvent;
:9: error: illegal start of expression
    import java.util.ArrayList;
:9: error: not a statement
    import java.util.ArrayList;
:10: error: illegal start of expression
    import java.util.Random;
:10: error: not a statement
    import java.util.Random;
:12: error: illegal start of expression
    public class BattleRoyaleGame extends JPanel {
:51: error: : expected
    case KeyEvent.VK_UP -> playerY -= playerSpeed;
:51: error: illegal start of expression
    case KeyEvent.VK_UP -> playerY -= playerSpeed;
:52: error: : expected
    case KeyEvent.VK_DOWN -> playerY += playerSpeed;
:52: error: illegal start of expression
    case KeyEvent.VK_DOWN -> playerY += playerSpeed;
:53: error: : expected
    case KeyEvent.VK_LEFT -> playerX -= playerSpeed;
:53: error: illegal start of expression
    case KeyEvent.VK_LEFT -> playerX -= playerSpeed;
:54: error: : expected
    case KeyEvent.VK_RIGHT -> playerX += playerSpeed;
:54: error: illegal start of expression
    case KeyEvent.VK_RIGHT -> playerX += playerSpeed;
:119: error: reached end of file while parsing
    }
26 errors
1) M365 Groups
2) Microsoft Teams
3) SharePoint and Permissions
4) OneDrive Permission and Data
5) Emails and Outlook
6) Workflow/Job Frequency
Additional Notes:
- Documentation: Maintain clear documentation of when and why user accounts are disabled.
- Communication: Inform users about the impact of disabling their accounts, especially regarding access to Private Teams.
- Re-enabling Process: Establish a streamlined process for re-enabling accounts and re-adding users to necessary groups and teams.
- Regular Reviews: Conduct regular reviews of disabled accounts to ensure they are re-enabled or deleted as appropriate.
The first code block is incorrect. Match patterns like that only work when all the parameters to be matched on come after the colon. For example, the code block is equivalent to
def length (α : Type) : List α -> Nat
| List.nil => Nat.zero
| List.cons y ys => Nat.succ (length α ys)
Seconding @null's answer. The key is the Python extension: deleting ~/.vscode-server works only if you don't install the Python extension.
First cd into the project you are currently working on (type cd and the project name), then in the terminal type "flutter pub add intl".
Another common workaround, if available, is to use git clone with http(s) URIs instead of an SSH connection, which is often possible with GitHub/GitLab repos. You may ask for this or configure your git server to allow it.
I don't know why sender.currentTitle returns nil, but you can unwrap it (better not to force the unwrapping) and print the result.
I encountered the same error, try the steps below.
Add the following annotation to your application class: @EnableR2dbcRepositories.
Extend the R2dbcRepository interface instead of CoroutineCrudRepository. Make sure your import is org.springframework.data.r2dbc.repository.R2dbcRepository.
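A minimal sketch of the resulting repository (the entity and ID types are examples):

import org.springframework.data.r2dbc.repository.R2dbcRepository;

public interface UserRepository extends R2dbcRepository<User, Long> {
}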
The following code allows you to print both the host and the URI to the console:
Serial.print("Request: ");
Serial.print(webServer.hostHeader());
Serial.println(webServer.uri());
You probably need to add:
android:exported="true"
to the declaration of the receiver in your AndroidManifest.xml.
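For example (the receiver class name here is a placeholder for yours):

<receiver
    android:name=".MyBroadcastReceiver"
    android:exported="true">
    <!-- your intent filters -->
</receiver>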
I think you will have no problem after that.
useParams is not a React core hook; it's officially provided by react-router-dom. So technically it will resolve the slug and hand it to your component when the route renders. You can do simple validation on the backend or frontend:
if(!slug) return
Not sure what database or SQL flavor you're using but array_contains works for querying an array of strings:
SELECT *
FROM table
WHERE ARRAY_CONTAINS(column_name, "string") = True;
Doing it this way generates the actual rows of data, not just a Boolean as shown in the documentation.
https://docs.data.world/documentation/sql/reference/functions/array_contains.html
This can happen if you recently renamed anything across the entire solution; such a rename can affect the csproj file.
To answer this: just copy the path of the folder you are looking for and type cd (path of the folder). It will work fine. Hope it helps.
There are two things in play here. First, as Eric pointed out, you're passing the address of name, not the contents (although you would use %p instead of %d to print that properly). But even if you cast and dereferenced the argument so as to interpret the contents of that array as an integral type (which might run into alignment issues: ints aren't necessarily allowed to start at any point in memory. They generally need to start at a word boundary), it wouldn't necessarily print the same as the other:
Some CPUs store the bytes of a multibyte object out of order. This is called the endianness of the representation. Intel chips are little-endian, where the least significant byte is stored at the front (as are most other modern systems. I think Apple used to use big-endian chips, but I'm not sure). So (for a 32-bit unsigned int), 0x00112233 would actually be stored as { 0x33, 0x22, 0x11, 0x00 }.
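A tiny sketch you can compile to see the byte order on your own machine (inspecting an object's bytes through an unsigned char pointer is well-defined, unlike the int reinterpretation above):

#include <stdio.h>

int main(void) {
    unsigned int x = 0x00112233;
    const unsigned char *p = (const unsigned char *)&x;
    for (size_t i = 0; i < sizeof x; i++)
        printf("%02x ", p[i]);  /* little-endian machines print: 33 22 11 00 */
    printf("\n");
    return 0;
}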
With Edge it seems that persistent indexeddb is not possible. In the developer tools, Application/IndexedDB is shown as not persisted, and I don't see how this can be changed. On Chrome, open chrome://settings/content/siteData and either allow all web sites or just yours. And you need a secure context, as so often.
In my case, I put a form inside the DropdownMenu.Item of Radix Primitives' Dropdown Menu component; I was able to solve this by applying asChild={true} to it.
<DropdownMenuItem asChild={true}>
<form>
...
</form>
</DropdownMenuItem>
Thanks @Paulw11, now I understand. Execution just continues after the call to the Swift code; you need to just let the completion block do its thing and handle things from there.
The Objective-C code now is:
- (void) ReceiptValidatedWith:(int)validation_code
{
if (validation_code != 1) {
ALog(@"AppStore receipt invalid, exiting.");
[NSApp terminate:self];
}
else {
ALog(@"AppStore receipt VALID, continuing.");
}
}
- (void) validateReceipt
// Validating receipt now only possible via Swift, call method AppTransaction.shared in swift module.
// As that all is async code we need to jump through some hoops to get at the value of the validity code.
{
ALog(@"validating receipt...");
[self.receiptValidation getValidityOfAppStoreReceiptWithCompletionHandler:^(NSInteger validation_code, NSError *error) {
[self ReceiptValidatedWith:validation_code]; // called on completion of the Swift code
}];
}
Just for the heck of it the code to interface to Swift:
.h
// AppStore receipt validation module (Swift)
@property (nonatomic,strong) ReceiptValidation *receiptValidation; // swift
.m
@synthesize receiptValidation; // swift module
// AppStore receipt validation with swift module
self.receiptValidation = [[ReceiptValidation alloc] init];
If you want to get only the numbers from the text, you can easily use:
"(\d+)"
When you try it on the regex101 platform, the results are:
also, you can use Regex:
"\}?\{?(\d+)\\?\}?"
to skip all special and unwanted characters; the results should look like this:
I had the same error; the problem comes from the Firefox browser. With Chrome this error does not occur.
I do not have the reputation to comment on any of the answers. But as of 2024, none of these methods work.
Windows recognizes that the ssh.bat file is not ssh.exe, with or without the echo (at least, that is what I conclude from:
The specified path "C:\Users\<user>\ssh.bat" is not a valid SSH binary
Checking ssh with ...)
It works effortlessly with plink: I can, in the terminal, say plink user@hostname while I have a ticket in MIT Kerberos, and I get in. But it just doesn't work with the methods used for VS Code. It would be so nice if we had the ability to use Plink instead of ssh-ing into the remote through a native Windows SSH client, and get the same functionality the Remote SSH extension gives us.
But I guess I'll be programming using the terminal now.
I would love to be told by someone that there is still a way out.
Did you find a solution to this problem?
Regrettably, this is a completely wrong application of pointers. Just separate the two functions, write and read, into two different source files and see for yourself.
It is possible to do so, but certainly not optimal or recommended given your use case. It is not possible to predict the character range before AES encryption, so the main idea is that you could encrypt the message with a different key and/or a different IV, repeating until you detect programmatically that the ciphertext contains no special characters, before sending it over the network. Your service will then suffer another problem regarding the security of key sharing and decryption, unless you choose to work only with a new IV. Using Base64 encoding as in TheNytangel's answer will use less bandwidth over the network while tackling the initial problem more effectively.