For those interested in this - I found a solution in "Eliminating Extra Spaces in String Fields When Exporting Data to a Text File" on Pentaho.
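The linked article aside, the underlying idea is simply to trim padding from each string field before writing the delimited line. A minimal Python sketch of that idea (the field values and delimiter here are made up for illustration, not taken from the Pentaho article):

```python
# Rows as they might come out of a fixed-width source, with padding
rows = [["alice   ", "  42 "], ["bob ", " 7  "]]

# Strip the padding from every field before joining into delimited output
cleaned = [";".join(field.strip() for field in row) for row in rows]
print("\n".join(cleaned))
```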
You forgot to add "use client" at the top of the component.
What if I use an nginx reverse proxy with a path prefix, but the backend service is addressed by IP and port? How should the client connect to the WebSocket through this proxy? If the client does not include the path prefix, the nginx server will reject the request.
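For reference, a common nginx pattern for proxying WebSocket traffic under a path prefix to an IP:port backend looks roughly like this (the address and prefix are placeholders, not from the original setup):

```nginx
location /ws/ {
    # backend addressed by IP and port (placeholder address)
    proxy_pass http://10.0.0.5:8080/;
    # WebSocket handshake requires HTTP/1.1 plus Upgrade/Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

The client would then connect to ws(s)://your-domain/ws/ rather than to the backend directly.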
Here is an example of cropping an SVG through the viewBox attribute.
import re

def verify_bbox(bbox):
    if not isinstance(bbox, (tuple, list)):
        raise TypeError(f"Bounding box should be a tuple or list, got {type(bbox)}")
    if len(bbox) != 4:
        raise ValueError(f"Bounding box should have 4 values [left, top, right, bottom], got {len(bbox)}")
    for b in bbox:
        if not isinstance(b, (int, float)) or b < 0 or b > 1:
            raise ValueError(f"Bounding box values should be between 0 and 1, got {b}")

def crop_svg(svg, bbox):
    verify_bbox(bbox)  # left, top, right, bottom in 0-1 range
    def crop_viewbox(m):
        vb, *_ = m.groups()
        x, y, w, h = [int(v) for v in vb.split()]
        X, Y = bbox[0]*w + x, bbox[1]*h + y  # offset by the original viewBox
        W, H = (bbox[2] - bbox[0])*w, (bbox[3] - bbox[1])*h
        return m.group().replace(vb, f'{X} {Y} {W} {H}')  # replace viewBox with the new values
    return re.sub(r'viewBox=["\'](.*?)["\']', crop_viewbox, svg, 1)
svg = '''<svg xmlns="http://www.w3.org/2000/svg" height="50px" viewBox="0 0 25 25" fill="red" stroke-width="2" transform="rotate(45)">\n <rect x="9" y="0" width="8" height="6"/>\n <rect x="9" y="7" width="1" height="10"/>\n <rect x="12" y="7" width="2" height="10"/>\n <rect x="16" y="7" width="1" height="10"/>\n <polygon points="9 18,17 18,13 25,9 18"></polygon>\n</svg>'''
crop_svg(svg, [0.5,0,1,1])
'<svg xmlns="http://www.w3.org/2000/svg" height="50px" viewBox="12.5 0 12.5 25" fill="red" stroke-width="2" transform="rotate(45)">\n <rect x="9" y="0" width="8" height="6"/>\n <rect x="9" y="7" width="1" height="10"/>\n <rect x="12" y="7" width="2" height="10"/>\n <rect x="16" y="7" width="1" height="10"/>\n <polygon points="9 18,17 18,13 25,9 18"></polygon>\n</svg>'
There are two things to note: height and width should be 'auto' or missing for the cropped SVG to display correctly, or both should be reset accordingly. You can add the following line:
re.sub(r'height=["\'](.*?)["\']', 'height="auto"', svg, 1)
I was shown favor by the gods and rebooting my machine fixed the issue.
I have a form inside a modal with two radio inputs.
In jQuery, I load the values from a table via AJAX, and depending on the loaded value I check one radio input or the other:
if (data.rsi == 1) { $("#txtSi").attr('checked', true); } else { $("#txtNo").attr('checked', false); }
if (data.rno == 1) { $("#txtNo").attr('checked', true); } else { $("#txtSi").attr('checked', false); }
It doesn't work for me. How do I do it? I'm told I need to target the modal containing the form, but how is that done?
One that just bit me migrating an old codebase....
$x=$obj->$classes[0]; parses as intended in 5.6--getting a class name from $classes[0]--but 7.4 tries to use just $classes and then errors out on the [0] literal. So....
$x=$obj->{$classes[0]}; curly braces save the day to make 7.4 happy.
Looking at the code in your github.com link, @dnadlinger, I can't quite tell whether your 64-bit Windows code lacks the solution you describe for 32-bit Windows, or whether in fact the corresponding solution for 64-bit Windows is simpler.
Assuming the latter, I've tried to adapt the Boost.Context 64-bit Windows fcontext masm implementation to save and restore the three-pointer block at GS:[0]
on every context switch, which is what I infer the D language's 64-bit Windows fiber context switch code is doing.
Unfortunately, even with the linked changes, the test programs I'm using to exercise context switching during exception handling (linked in the PR) behave the same as with unmodified Boost.Context.
What am I missing?
I ran into the same problem. I found the difference was in devServer: adding the headers config (i.e. uncommenting devServer.headers) made it work:
devServer: {
    proxy: {},
    headers: {}
}
This was driving me crazy on Arc, so I spent some time digging into and resetting every setting imaginable. Turns out, I had graphics acceleration disabled in Chrome://settings/system which borked Dartpad. Posting here in case anyone else comes across this seemingly lone post about this issue - hopefully this helps! I'll be cross-posting in the github issue shortly.
Please see https://github.com/marketplace/actions/ftp-deploy for an action that will let you deploy via FTP.
I have installed "Notification" by BracketSpace and disabled the WordPress default comment options in Settings > Discussion. I had to work up my own email format using the plugin's editor to replicate the default comment notification, but with a different subject line. This works.
I've been trying to test a local website that works fine on my Linux laptop, but on iPad it does not recognise external stylesheets. I tried Firefox as well as Safari on a friend's iPad; neither browser on iPad recognises external stylesheets, so I conclude this is an iPad/iOS issue rather than Safari alone. A pretty abysmal show from Apple if you ask me, given that external stylesheets have been around for 10 years plus.
Yeah, the application's access appeared registered under "delegated permissions," whereas creating a SharePoint list subscription requires "application" permissions.
https://learn.microsoft.com/en-us/graph/api/subscription-post-subscriptions?view=graph-rest-1.0&tabs=http#permissions
If xlwings isn't a strict requirement for your use case, I recommend checking out xloil. It will allow you to write UDFs in Python, to return Python objects in Excel cells, and then to reuse those returned objects (again, as Python objects) in subsequent calculations, including as arguments in formulas in other cells.
What would be a case where one would need git merge, with or without the --ff flag? Sorry, I am quite new to Git, and rebase seems like the best option.
My use case: in an app that I am working on, another dev sent a PR to the main branch and I made a commit before pulling.
@A. Berk Your answer is just perfect
So this is a common configuration required for setting up Lombok. If using IntelliJ, the usual steps are: install the Lombok plugin (Settings > Plugins) and enable annotation processing (Settings > Build, Execution, Deployment > Compiler > Annotation Processors).
Once this is done, the errors should go away.
Adjust n_iter: The total number of hyperparameter combinations is 3×3×3×3 = 81. Since this is smaller than n_iter=500, reduce n_iter to 81 or less.
Fix the booster parameter: If you are explicitly setting a booster, ensure it is one of the valid types ('gbtree', 'gblinear', or 'dart').
Disable the deprecated label encoder: Pass use_label_encoder=False to XGBClassifier to suppress the warning.
Validate parameter names: Double-check that all hyperparameter names are valid for XGBClassifier.
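To illustrate the n_iter point: the number of distinct candidates a randomized search can draw is bounded by the size of the grid. A minimal sketch of the arithmetic (the parameter names and values below are assumed for illustration, not taken from the original code):

```python
from math import prod

# Hypothetical grid with 3 options per parameter: 3*3*3*3 = 81 combinations
param_distributions = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
    "n_estimators": [100, 200, 300],
    "subsample": [0.6, 0.8, 1.0],
}

total_combinations = prod(len(v) for v in param_distributions.values())

# Cap n_iter at the grid size so the search does not request more
# distinct candidates than actually exist
n_iter = min(500, total_combinations)
print(total_combinations, n_iter)  # 81 81
```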
Use the following code to switch on pointer types:
d := &data{}
switch p := ptr.(type) {
case *float64:
	// We handle this in two steps because int(*p)
	// is not addressable.
	// (1) Convert to int and assign to variable.
	i := int(*p)
	// (2) Assign address of variable to field.
	d.timeout = &i
case *int:
	d.timeout = p
case nil:
	// Clear field when p == nil.
	d.timeout = nil
default:
	panic(fmt.Sprintf("unexpected type %T", p))
}
Still not working for me, even after I followed all the instructions and tutorials.
If you are using Spring Boot 3.4.0 with JDK 17 or above, update the dependency version below from 2.5.0 to 2.7.0, which worked in my case:
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.7.0</version>
</dependency>
Regarding Pierz's answer - it's better to add '-c copy' parameter to ffmpeg command - then the download will be faster (as there will be no transcoding):
ffmpeg -i http://dash.edgesuite.net/dash264/TestCases/1a/netflix/exMPD_BIP_TC1.mpd -c copy exMPD_BIP_TC1.mp4
Follow the video to write the same test cases and you should be able to get 9/10. Then, as Milos mentioned, add the following test case to solve the problem and get the last point:
public void test_is_NOT_triangle_8() {
    assertFalse(Demo.isTriangle(1, 2, 3));
}
Did you get this working for topics as well?
As @Blckknght says, close is a method, not a property, so you missed the parentheses "()" when calling close.
But there is a better way to do this task, one that automatically closes the file when it is no longer needed.
Here is the fixed version of the script you showed us:
import shutil
import os
f_new = "file.new"
f_old = "file.old"
content = "hello"
fn = open(f_new, "w")
fn.write(content)
fn.close()
print("1:", content)
#os.replace(f_new, f_old)
shutil.move(f_new, f_old)
fo = open(f_old, "r")
cont_old = fo.read()
print("2:", cont_old)
fo.close()
And here is an improved script that does the same task:
import shutil
import os

new_file = "new_file.txt"
old_file = "old_file.txt"
content = "hello"

with open(new_file, "w") as file:
    file.write(content)
print("1:", content)

# os.replace(new_file, old_file)
shutil.move(new_file, old_file)

with open(old_file, "r") as file:
    old_content = file.read()
print("2:", old_content)
You can read more about the with keyword in the documentation.
Basically, with provides automatic clean-up, which you can also define yourself, and which helps release resources such as open files when they are no longer needed.
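To show how this clean-up can be defined for your own objects, here is a minimal custom context manager (a standalone sketch, not tied to the file example above):

```python
class Resource:
    """Minimal context manager: __exit__ runs even if the block raises."""

    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("acquired")
        return self

    def __exit__(self, exc_type, exc, tb):
        # Clean-up step, analogous to file.close() for open files
        self.events.append("released")
        return False  # do not suppress exceptions

res = Resource()
with res:
    res.events.append("used")
print(res.events)
```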
Unable to install thinc on a Mac M3. Any input, experts?
Using cached spacy-3.8.2.tar.gz (1.3 MB)
...
Building wheels for collected packages: thinc
Building wheel for thinc (pyproject.toml): started
Building wheel for thinc (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Building wheel for thinc (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [383 lines of output]
Cythonizing sources
running bdist_wheel
running build
running build_py
creating build/lib.macosx-10.13-universal2-cpython-313/thinc
=Versions/3.13/include/python3.13/Python.h won't be automatically included in the manifest: the path must be relative
dependency /private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayscalars.h won't be automatically included in the manifest: the path must be relative
dependency /private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarraytypes.h won't be automatically included in the manifest: the path must be relative
dependency /private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ufuncobject.h won't be automatically included in the manifest: the path must be relative
reading manifest file 'thinc.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'tmp'
adding license file 'LICENSE'
writing manifest file 'thinc.egg-info/SOURCES.txt'
/private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/setuptools/command/build_py.py:212: _Warning: Package 'thinc.tests.mypy.configs' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'thinc.tests.mypy.configs' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'thinc.tests.mypy.configs' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'thinc.tests.mypy.configs' to be distributed and are
already explicitly excluding 'thinc.tests.mypy.configs' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
/private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/setuptools/command/build_py.py:212: _Warning: Package 'thinc.tests.mypy.outputs' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'thinc.tests.mypy.outputs' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'thinc.tests.mypy.outputs' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'thinc.tests.mypy.outputs' to be distributed and are
already explicitly excluding 'thinc.tests.mypy.outputs' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
copying thinc/__init__.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc
copying thinc/py.typed -> build/lib.macosx-10.13-universal2-cpython-313/thinc
copying thinc/layers/premap_ids.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/layers
copying thinc/layers/sparselinear.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/layers
copying thinc/backends/__init__.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/_custom_kernels.cu -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/_murmur3.cu -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/cblas.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/cblas.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/cpu_kernels.hh -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/linalg.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/linalg.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/numpy_ops.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/backends/numpy_ops.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/backends
copying thinc/extra/__init__.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc/extra
copying thinc/extra/search.pxd -> build/lib.macosx-10.13-universal2-cpython-313/thinc/extra
copying thinc/extra/search.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/extra
creating build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/configs
copying thinc/tests/mypy/configs/mypy-default.ini -> build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/configs
copying thinc/tests/mypy/configs/mypy-plugin.ini -> build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/configs
creating build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/fail-no-plugin.txt -> build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/fail-plugin.txt -> build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/success-no-plugin.txt -> build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/success-plugin.txt -> build/lib.macosx-10.13-universal2-cpython-313/thinc/tests/mypy/outputs
copying thinc/extra/tests/c_test_search.pyx -> build/lib.macosx-10.13-universal2-cpython-313/thinc/extra/tests
running build_ext
building 'thinc.backends.cblas' extension
creating build/temp.macosx-10.13-universal2-cpython-313/thinc/backends
clang++ -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -I/private/var/folders/wy/tp92t7_s27lgtszstrzhxl4w0000gn/T/pip-build-env-5k8ewgz5/overlay/lib/python3.13/site-packages/numpy/_core/include -I/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13 -I/Users/I072857/Documents/git/gstack/gstack_env/include -I/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13 -c thinc/backends/cblas.cpp -o build/temp.macosx-10.13-universal2-cpython-313/thinc/backends/cblas.o -O3 -Wno-strict-prototypes -Wno-unused-function -std=c++11
thinc/backends/cblas.cpp:871:59: warning: 'Py_UNICODE' is deprecated [-Wdeprecated-declarations]
871 | static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
| ^
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/unicodeobject.h:10:1: note: 'Py_UNICODE' has been explicitly marked deprecated here
10 | Py_DEPRECATED(3.13) typedef wchar_t Py_UNICODE;
| ^
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/pyport.h:251:54: note: expanded from macro 'Py_DEPRECATED'
251 | #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
| ^
thinc/backends/cblas.cpp:872:11: warning: 'Py_UNICODE' is deprecated [-Wdeprecated-declarations]
872 | const Py_UNICODE *u_end = u;
| ^
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/unicodeobject.h:10:1: note: 'Py_UNICODE' has been explicitly marked deprecated here
10 | Py_DEPRECATED(3.13) typedef wchar_t Py_UNICODE;
| ^
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/pyport.h:251:54: note: expanded from macro 'Py_DEPRECATED'
251 | #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
| ^
thinc/backends/cblas.cpp:1908:22: error: use of undeclared identifier '_PyList_Extend'; did you mean 'PyList_Extend'?
1908 | PyObject* none = _PyList_Extend((PyListObject*)L, v);
| ^~~~~~~~~~~~~~
| PyList_Extend
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/listobject.h:52:17: note: 'PyList_Extend' declared here
52 | PyAPI_FUNC(int) PyList_Extend(PyObject *self, PyObject *iterable);
| ^
thinc/backends/cblas.cpp:1908:37: error: cannot initialize a parameter of type 'PyObject *' (aka '_object *') with an rvalue of type 'PyListObject *'
1908 | PyObject* none = _PyList_Extend((PyListObject*)L, v);
| ^~~~~~~~~~~~~~~~
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/listobject.h:52:41: note: passing argument to parameter 'self' here
52 | PyAPI_FUNC(int) PyList_Extend(PyObject *self, PyObject *iterable);
| ^
thinc/backends/cblas.cpp:1946:39: error: use of undeclared identifier '_PyInterpreterState_GetConfig'
1946 | __pyx_assertions_enabled_flag = ! _PyInterpreterState_GetConfig(__Pyx_PyThreadState_Current->interp)->optimization_level;
| ^
thinc/backends/cblas.cpp:20354:27: error: no matching function for call to '_PyLong_AsByteArray'
20354 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ^~~~~~~~~~~~~~~~~~~
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/longobject.h:111:17: note: candidate function not viable: requires 6 arguments, but 5 were provided
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^ ~~~~~~~~~~~~~~~~
112 | unsigned char* bytes, size_t n,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
113 | int little_endian, int is_signed, int with_exceptions);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
thinc/backends/cblas.cpp:20550:27: error: no matching function for call to '_PyLong_AsByteArray'
20550 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ^~~~~~~~~~~~~~~~~~~
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/longobject.h:111:17: note: candidate function not viable: requires 6 arguments, but 5 were provided
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^ ~~~~~~~~~~~~~~~~
112 | unsigned char* bytes, size_t n,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
113 | int little_endian, int is_signed, int with_exceptions);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
thinc/backends/cblas.cpp:20822:27: error: no matching function for call to '_PyLong_AsByteArray'
20822 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ^~~~~~~~~~~~~~~~~~~
/Library/Frameworks/Python.framework/Versions/3.13/include/python3.13/cpython/longobject.h:111:17: note: candidate function not viable: requires 6 arguments, but 5 were provided
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^ ~~~~~~~~~~~~~~~~
112 | unsigned char* bytes, size_t n,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
113 | int little_endian, int is_signed, int with_exceptions);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 warnings and 6 errors generated.
error: command '/usr/bin/clang++' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for thinc
Failed to build thinc
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (thinc)
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
Try deleting your Unity project and making a new one. Unity says to use conda, so try that too. Use Python 3.9.
Along with the fixed position and 100% width, use z-index: 100;.
You probably close the stream (try-with-resources) when you throw an exception. Try defining a separate stream and wrapping it with StreamingResponseBody; the exception will then be handled correctly by @ControllerAdvice.
StreamingResponseBody responseBody = outputStream -> {
    try (ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
        // some logic here
        byteArrayOutputStream.writeTo(outputStream);
    } catch (IOException e) {
        // byteArrayOutputStream will be closed but not outputStream
        log.error("Error occurred while streaming: {}", e.getMessage(), e);
        throw new CustomException("Error due to I/O issues.", e);
    }
};
Try to send an ACK after a message is received in the consumer.
For me, the root cause of the issue was that "Use legacy console" was enabled in the Windows command prompt (which I was not even using).
The file %USERPROFILE%\AppData\Local\rancher-desktop\logs\wsl-exec.log
had a statement about that (I cannot paste it because the log was wiped after fixing the problem).
This page has the instructions on how to disable the legacy console mode: https://winaero.com/bash-in-windows-10-fix-unsupported-console-settings/
The instructions are for Windows 10, but they worked on Windows 11 for me as well.
OK, I have checked the module code (at https://github.com/ansible-collections/community.general/blob/main/plugins/modules/xml.py#L847).
At line 435, a comment mentions that lxml 5.1.1 has deprecated _ElementStringResult. If I downgrade lxml to 5.1.0, the script is OK and the document is changed as I want!
So the Ansible module can no longer use lxml versions newer than 5.1.0.
I have opened an issue on GitHub for that. Thanks.
I got this error while using redux-persist. I moved the types from other files (i.e. external modules) to above the declaration of the variable throwing the TS error.
It was complaining about two types (InitialStateOfControlSlice and InitialStateOfProfileSlice), so I just made sure they are in the same file.
We should avoid using production credentials. When using the Firebase emulators, there is no need for production-specific configuration.
If you look through the files of the "inference" library, you may come across a function called download in the class InferenceModel. This function is intended for downloading weights files, but not in the ONNX format. However, the API key does not seem to work for it and throws an error.
def download(self, format="pt", location="."):
    """
    Download the weights associated with a model.

    Args:
        format (str): The format of the output.
            - 'pt': returns a PyTorch weights file
        location (str): The location to save the weights file to
    """
    supported_formats = ["pt"]
    if format not in supported_formats:
        raise Exception(f"Unsupported format {format}. Must be one of {supported_formats}")
    workspace, project, version = self.id.rsplit("/")
    # get pt url
    pt_api_url = f"{API_URL}/{workspace}/{project}/{self.version}/ptFile"
    r = requests.get(pt_api_url, params={"api_key": self.__api_key})
    r.raise_for_status()
    pt_weights_url = r.json()["weightsUrl"]
    response = requests.get(pt_weights_url, stream=True)
    # write the zip file to the desired location
    with open(location + "/weights.pt", "wb") as f:
        total_length = int(response.headers.get("content-length"))  # type: ignore[arg-type]
        for chunk in tqdm(
            response.iter_content(chunk_size=1024),
            desc=f"Downloading weights to {location}/weights.pt",
            total=int(total_length / 1024) + 1,
        ):
            if chunk:
                f.write(chunk)
                f.flush()
    return
Here is the server's answer:
{
    "error": "Not authorized to download this model in pt format."
}
Maybe somebody has ideas how to do it?
You can access the Kubernetes API to retrieve information about the node where your pod is running:
kubectl get node <node name> -o json
By running the command you'll be able to see the nodeInfo section in the lower part of the output, with fields such as nodeID, bootID, machineID, systemUUID, etc.
I think it might have been changed in recent versions of moviepy, removing the .editor part. The workaround for me was to be specific with the imports:
from moviepy import (
    ImageClip,
    TextClip,
    CompositeVideoClip,
    AudioFileClip,
    concatenate_videoclips,
)
I found a solution by going to Settings > Media and unchecking "Organize my uploads into month- and year-based folders".
There may be a conflict between the actual Python interpreter path and the one configured in the program. The problem could also be due to overlapping virtual environments.
You can search for the Python path and replace it in the PyCharm settings. If the issue persists, try deleting the virtual environment and reconfiguring it.
You could use RxJava and Maybe.zip() to get something similar to what Scala offers.
Maybe.zip(
        Maybe.fromOptional(opt1), Maybe.fromOptional(opt2),
        (value1, value2) -> value1.equals(value2))
    .defaultIfEmpty(false)
Turns out I just needed to do:
flutter config --jdk-dir "C:\Program Files\Android\Android Studio\jbr"
Did you solve this problem? I have a similar problem now and I can't solve it.
If I understand the question correctly, you have pdf.js set up as a separate script file.
I suggest that you use Vite along with vite-plugin-singlefile to bundle all the necessary JS into a single HTML file.
The status code returned (403) indicates that AWS CloudFront received the request but cannot respond with the requested content.
You mentioned that in another location you do not receive this error. To me, this suggests a possible geo-restriction issue.
The documentation on the AWS website says that CloudFront can enforce geographic restrictions, for example only allowing edge servers to send files to users with IPs from Australia.
By default, all content served by CloudFront is accessible worldwide.
Since PHP 8.1, strftime is deprecated. Here is my one-liner for a random birthdate, age 18-90:
$birthdate = date('Y-m-d', random_int((new DateTime('90 years ago'))->format('U'), (new DateTime('18 years ago'))->format('U')));
Here is a ready-made template library:
https://github.com/H1l4nd0r/php_templates/tree/master
I created it several years ago and am still using it (internally it creates a tree and then builds it).
With the Quarto 1.6 release, there is now a landscape mode for the pdf, docx, and typst formats, without doing anything special on the user side:
This will appear on a portrait page.
::: {.landscape}
This will appear on a landscape page.
:::
This will appear on another portrait page.
Verify that your API endpoint URLs are absolute. Check the CORS headers on your API server. Ensure your GitHub Pages settings are correct. Inspect the browser console errors and network requests. Consider proxying the API requests. And please provide more details.
Thanks to @wep21 on GitHub, I have a fix for this:
In the deps for cc_library and cc_binary targets, we need to include both "@ncurses//:ncurses_headers" and @ncurses. Also, files need to #include <curses.h> instead of #include <ncurses.h>. See this commit for more details.
Further, the functions refresh and box are not available (though I am not sure why), and these functions need to be replaced with calls to wrefresh and wborder. See this commit for more details.
I believe you should replace line 43 in the Dockerfile:
RUN npm install -g npm
with the command:
RUN npm install -g pnpm
Also, modify line 115 in the bootstrap.sh file:
npm install --loglevel verbose
to:
pnpm install --loglevel verbose
This is what I think, and I wish you success!!
You can set up configurable shell profiles with commands and variables using SSM Session Manager preferences.
Did you find the solution? I have the same problem.
The key here is to parse the OpenAPI JSON file for the desired content. You could then convert it to a format used by your Archbee documentation platform, here Markdown, but in separate files so you can manually add content.
For that you could use a script in your favourite language (e.g. Python) with parsing modules. Of course, if the JSON is only available through the internal network, the script will only be able to fetch the data while on it.
Another factor is how you want to automate the process: you may trigger the script automatically at a certain interval, for example every day, or each time the JSON file gets modified.
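As a rough sketch of the parsing step described above (the spec structure shown is a made-up minimal example, and the Markdown layout is one arbitrary choice, not an Archbee requirement):

```python
# Stand-in for json.load(open("openapi.json")): a tiny OpenAPI-shaped dict
spec = {
    "paths": {
        "/users": {
            "get": {"summary": "List users"},
            "post": {"summary": "Create a user"},
        }
    }
}

def paths_to_markdown(spec):
    """Render each operation as a Markdown heading plus its summary."""
    lines = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            lines.append(f"## {method.upper()} {path}")
            lines.append(op.get("summary", ""))
            lines.append("")
    return "\n".join(lines)

print(paths_to_markdown(spec))
```

Each such section could then be written to its own file so manual content can be added alongside it.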
Instead of opening 7 terminals every 3 seconds, why not open one terminal?
adb shell "settings get global airplane_mode_on && settings get system system_locales && ... && exit"
Modify your constructor :
@Autowired
public StarshipService() {
    this.webClient = WebClient.builder()
            .baseUrl("https://www.swapi.tech/api")
            .build();
}
For what it's worth, and even though this post may be better suited to a different thread, I'll still risk the post delete. I think, as a simple example, reverse-proxy VPN software like tailscale.com could act as a surrogate IdP. Instead of having to choose between using either the "IdP-side user end" sign-on authentication or the "SP-initiated sign-on" option, a single reverse-proxy VPN could address both choices simultaneously and completely.
I ended up wrapping the runner in my own container: https://github.com/mystdeim/github-runner. It's pretty easy, and I hope my container will help you. All you need is a PAT token.
Nowadays, it's significantly simpler than the older answers suggest. Specifically:
1. Select Menu Bar > "View" > "Command Palette" (Control+Shift+P).
2. Select ".NET: New Project".
3. Choose the desired project type.
4. Choose the desired location for the generated files. You should create a directory and choose that (instead of choosing $HOME, for example), because it won't automatically create a subdirectory for its files.
5. Name the project.
6. Confirm the location of the project to initiate its creation.
Your project is now available to be invoked.
Some codes end up as Base64, but sometimes they represent special formatting applied before the data itself is encoded.
For example: converting data to TLV formatting and then to Base64. When you decode it, you will notice that it does not represent the raw data directly.
So I believe they take the data, manipulate it, and then convert it to Base64, so that no one can read it except their device.
The only way to read that Base64 is to know how to interpret the data after converting it to hex or bytes, or to know the name of the algorithm they use before the Base64 step.
You should also provide some extra information, like the API, the company's website, and anything else you think might provide some ideas.
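To make the point concrete, here is a small sketch of the TLV-then-Base64 idea. The one-byte tag/length scheme is my own simplification for illustration; real formats (e.g. BER-TLV) are more involved:

```python
import base64

def tlv_encode(tag: int, value: bytes) -> bytes:
    """Minimal single-byte tag / single-byte length TLV encoding (illustrative only)."""
    if len(value) > 255:
        raise ValueError("value too long for a one-byte length field")
    return bytes([tag, len(value)]) + value

payload = tlv_encode(0x01, b"hello")
encoded = base64.b64encode(payload).decode()

# Decoding the Base64 alone only yields the TLV bytes, not readable data;
# you still need to know the framing to recover the value.
decoded = base64.b64decode(encoded)
tag, length = decoded[0], decoded[1]
value = decoded[2:2 + length]
```

This is why "just Base64-decoding" such strings produces what looks like garbage: the structure underneath must be known too.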
Creating a unique index on the same column as a clustered primary key column is unnecessary. Creating a clustered primary key on a table automatically creates a coinciding unique clustered index on the table. See the “Primary and foreign key constraints” article for reference. Adding an additional unique, non-clustered index to the table adds to the table’s data storage. The additional index also adds extra overhead to your table maintenance and index maintenance work.
You can confirm this with a sys.key_constraints and sys.indexes query.
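For example, a query along these lines lists the primary key constraint and its coinciding index (the table name is taken from the question's example; adjust for your schema):

```sql
SELECT kc.name        AS constraint_name,
       kc.type_desc   AS constraint_type,
       i.name         AS index_name,
       i.type_desc    AS index_type,
       i.is_unique
FROM sys.key_constraints AS kc
JOIN sys.indexes AS i
  ON i.object_id = kc.parent_object_id
 AND i.index_id  = kc.unique_index_id
WHERE kc.parent_object_id = OBJECT_ID('dbo.Products');
```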
Once you drop the duplicative unique index, IX_Products_1, your query should utilize the clustered index on the primary key. Even if you leave in the unique index on the table, the query optimizer automatically builds the most efficient query plan it can estimate based on your query. It is not guaranteed that the optimizer will choose the unique index over the clustered index, or vice versa. See the “Logical and physical showplan operator reference” article for a listing of operations used by the query optimizer.
I have working code that I am running in production. The x and y variables you are using in the pan function are the same as xOffset and yOffset in the pinch function, so remove the x and y variables and just use xOffset and yOffset instead.
Hope it helps.
reveal_type is a MyPy-specific construct for debugging types and isn't part of PEP 484. You should define a reusable type alias instead.
from typing import Callable
# Define the type alias
MyCallback = Callable[[int], bool]
# Your function definitions
def stream(b: int, f: MyCallback) -> bool:
return f(b)
I think that I have done the proper script for what I wanted to achieve...
```
#!/bin/bash

WATCH_DIR="/home/media/Movies"
DELAY=20
LAST_EVENT_TIME=0

# FileBot Command
FILEBOT_CMD="filebot -rename -r -non-strict \"$WATCH_DIR\" --db TheTVDB --format \"{n}/Season {s}/{n} >

# All-done function
finalize_process() {
    CURRENT_TIME=$(date +%s)
    if (( CURRENT_TIME - LAST_EVENT_TIME >= DELAY )); then
        echo "All done, running script!"
        # eval $FILEBOT_CMD
    fi
}

# Monitor
inotifywait -m -r -e close_write --format '%w%f' "$WATCH_DIR" | while read -r file; do
    # Current timestamp
    EVENT_TIME=$(date '+%Y-%m-%d %H:%M:%S')
    # Echo for each event
    echo "Change detected for $file at $EVENT_TIME"
    # Last event time
    LAST_EVENT_TIME=$(date +%s)
    # Kill the pending finalization from the previous event, if any
    if [[ -n "$finalization_pid" && -e /proc/$finalization_pid ]]; then
        kill $finalization_pid
    fi
    # Wait $DELAY seconds, then check whether events have settled
    (
        sleep "$DELAY"
        finalize_process
    ) &
    finalization_pid=$!
done
```
This script shows an echo after every trigger, then waits 20 seconds, and if there is no other trigger, it fires a final echo with an optional command...
If someone has an opinion about this script, how to optimize it further, or a different and better approach, please share it.
About the double-trigger "issue" with inotifywait: I ran more tests, and if I monitor the folder with "inotifywait -m" for the close_write event, it acts a bit strangely. Let's say I copy 5 files (file 1, 2, 3, 4, 5). I get the echo for every finished file, but 12-15 seconds after the "file 1" trigger, the same trigger shows up again. For a better explanation, here are the echoed results from copying 5 files:
Jan 01 23:23:48 mediaserver auto_filebot.sh[4275]: Watches established.
Jan 01 23:24:12 mediaserver auto_filebot.sh[4276]: Change detected for /home/media/Movies/test/Episode 1.mkv at 2025-01-01 23:24:12
Jan 01 23:24:17 mediaserver auto_filebot.sh[4276]: Change detected for /home/media/Movies/test/Episode 2.mkv at 2025-01-01 23:24:17
Jan 01 23:24:22 mediaserver auto_filebot.sh[4276]: Change detected for /home/media/Movies/test/Episode 3.mkv at 2025-01-01 23:24:22
Jan 01 23:24:26 mediaserver auto_filebot.sh[4276]: Change detected for /home/media/Movies/test/Episode 1.mkv at 2025-01-01 23:24:26
Jan 01 23:24:28 mediaserver auto_filebot.sh[4276]: Change detected for /home/media/Movies/test/Episode 4.mkv at 2025-01-01 23:24:28
Jan 01 23:24:32 mediaserver auto_filebot.sh[4276]: Change detected for /home/media/Movies/test/Episode 5.mkv at 2025-01-01 23:24:32
Jan 01 23:24:52 mediaserver auto_filebot.sh[4361]: All done, running script!
Any idea why inotifywait doubles the first trigger after 12-15 seconds?
Thank you for the help
Try Prisma.EventFieldRefs; however, you will have to override the relation fields, as they are of String type. Otherwise, read more.
There is a known bug in PyCharm, related to Python 3.10: https://youtrack.jetbrains.com/issue/PY-54447/
This only appears in Debug mode.
For me, downgrading to Python 3.9 worked as a workaround until the issue is fixed.
If this is your case, you can give it a try.
When you save the Excel file, you can choose "CSV UTF-8 (Comma delimited)" as the Save as type.
Is there something that's preventing you from just using seaborn directly?
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

def plt1():
    # Example confusion matrix; on real data, use:
    # c_mtrx_N = pd.crosstab(y_test, y_pred_N, rownames=['Actual'], colnames=['Predicted'])
    c_mtrx_N = pd.DataFrame([[1, 2], [3, 4]],
                            index=['Actual 0', 'Actual 1'],
                            columns=['Predicted 0', 'Predicted 1'])
    plt.figure(figsize=(4, 4))
    sns.heatmap(c_mtrx_N, annot=True, fmt='.3g')
    plt.show()

plt1()
Also, sns.set() is deprecated. Use sns.set_theme() instead.
import { useNavigate } from 'react-router-dom';

function GoBackButton() {
  const navigate = useNavigate();
  return <button onClick={() => navigate(-1)}>Back</button>;
}

// In a single-page application, navigate(-1) emulates history.back(), which makes it easier to move through the browser history.
This has been fixed in mypy version v0.520 (Jul 2017) with Pull Request #3451
string s = " Bob Loves Alice ";
string[] allwords = s.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
Using suggestion from Ivo Velitchkov I have been able to update the code in my original question to form a working TED URL.
import sparqldataframe
import pandas as pd
# Define the SPARQL query
sparql_query = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX epo: <http://data.europa.eu/a4g/ontology#>
PREFIX cccev: <http://data.europa.eu/m8g/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT DISTINCT ?publicationNumber ?legalName ?publicationDate ?title ?description WHERE {
GRAPH ?g {
?notice a epo:Notice ;
epo:hasPublicationDate ?publicationDate ;
epo:hasNoticePublicationNumber ?publicationNumber ;
epo:announcesRole [
a epo:Buyer ;
epo:playedBy [
epo:hasLegalName ?legalName ;
cccev:registeredAddress [
epo:hasCountryCode ?countryUri
]
]
] ;
epo:refersToProcedure [
dcterms:title ?title ;
dcterms:description ?description
] .
}
?countryUri a skos:Concept ;
skos:prefLabel "Ireland"@en .
FILTER(CONTAINS(LCASE(STR(?legalName)), "dublin city council"))
}
ORDER BY ?publicationDate
"""
# SPARQL endpoint URL
endpoint_url = "https://publications.europa.eu/webapi/rdf/sparql"
# Execute the SPARQL query
try:
    df = sparqldataframe.query(endpoint_url, sparql_query)
    if not df.empty:
        # Build the TED URL based on the publication number and drop leading zeros
        df['noticeTEDuri'] = df['publicationNumber'].apply(
            lambda x: f"https://ted.europa.eu/en/notice/-/detail/{int(x.split('-')[0])}-{x.split('-')[-1]}"
        )
        # Display the results
        print("Tender Details with TED URLs for Dublin City Council:")
        print(df[['publicationNumber', 'legalName', 'publicationDate', 'title', 'description', 'noticeTEDuri']])
    else:
        print("No tenders found for Dublin City Council.")
except Exception as e:
    print("An error occurred while querying the SPARQL endpoint:", str(e))
This gives a result where a working TED URL is constructed from the publicationNumber.
(Results table omitted: it maps each TED Publication Number to its TED URL.)
What I am looking for is a field that I can hopefully retrieve directly with a SPARQL query.
I got an answer to my question from the Ollama community on Discord.
"localhost" means two different things inside vs. outside of the Docker container. Open WebUI sees localhost as local to the container. My laptop sees localhost as local to the laptop (which does not include the container).
So not all localhosts are equal. Now I need to figure out how to expose a virtual interface on the laptop that is accessible to the docker container, but not accessible from other hosts on the physical LAN.
It sounds like I need to learn more about how to manipulate the docker.internal network.
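As one possible direction (a sketch, not verified for this exact setup): on Linux Docker Engine, the container can be given a host.docker.internal entry that points at the host's gateway, which Open WebUI's documentation uses for reaching Ollama on the host. The image tag and port mapping below are the project's commonly documented defaults and should be checked against your installation:

```shell
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  ghcr.io/open-webui/open-webui:main
```

Binding Ollama only to the host's loopback/gateway interface (rather than 0.0.0.0) would keep it unreachable from other hosts on the physical LAN.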
I've had this same question. I've found that you can also pass a function down into a child component, watch for state changes from the parent component with a useEffect, and when the function is invoked in the child component, it forces a rerender of the child from the parent component.
I have downgraded leaflet to v1.8.0 to make this work with both JavaFX 23 and 21.
Reference types are inherently nullable, unlike value types, so you should always be checking reference types for null regardless of the presence of the Nullable boxed type. You should only use the Nullable type for value types that need to be boxed. Thus, adding the Nullable boxed type to the return of IEnumerator.Current does not add any value to the code.
This script shows how to split a "postal code" column in a pandas DataFrame into two columns: postal_code and postal_ext. The function accounts for the presence or absence of an extension and for null values. For example, the code 56789-2345 splits into 56789 and 2345, while codes like 45675 are left unchanged.
I am having the same issue. We have a secrets engine, for example: kv-v2/app/dev/app/global.
Under this I have app-specific folders in Vault, like app1 and app2. Each folder has multiple secret keys and values.
How should I approach this? I am using AppRole.
no-symkey-cache
max-cache-ttl 0
I'm facing a similar issue, how did you end up fixing it?
After renaming the FooService to something else without "Service", the issue is resolved.
It seems you are experiencing problems with the Pinterest PHP Bot script while commenting on pins. Let us know if you find a solution for the commenting issue!
Ran into this issue today (01/01/2025)
Solved by cleaning gradle cache (gradlew clean) + deleting .gradle, .kotlin, and .idea folders manually.
Not an elegant solution.
I'm surprised nobody mentioned margin-block-start and margin-block-end, but they should be used on both elements.
<p style="margin-block-end:0px">Line 1</p>
<p style="margin-block-start:0px">Line 2</p>
Another simple alternative is to just start the application via Run As -> Java Application (rather than Run As -> Spring Boot App), as this produces console output without all of the Spring startup output and without having to modify any configuration details.
This answer is 16 years too late for the original question, but in case anyone has the same problem, Kinetic Merge exists for this situation: https://github.com/sageserpent-open/kineticMerge.
The idea is to merge changes made on one branch (the QA/bug fix/release candidate branch) through code motion made on the other branch (the mainline where refactorings have moved things around).
Full disclosure: I’m the author of that software.
Fixed it... For some reason I had provider: giscus in the YAML, which caused the following error in Vercel: duplicated mapping key in "/vercel/path0/_config.yml" at line 121, column -9.
comments:
  provider: giscus   # <---- culprit here
  giscus:
    repo: w-dib/w-dib.github.io
    repo_id: R_kgDONaq1kA
    category: Announcements
    category_id: DIC_kwDONaq1kM4Clp6n
    mapping: pathname
    strict: 0
    input_position: bottom
    lang: en
    reactions_enabled: 1
I need a coffee
You need to do it like this:
Navigator.of(context).push(MaterialPageRoute(builder: (context) => YourScreen()));
Simply add this CSS:
color: transparent; /* for transparent text */
-webkit-text-stroke: 1px red; /* for the text border */
Well, Xcode is a fickle mistress... I'd been tearing my hair out over the issue and finally decided to close and reopen Xcode. Imagine my surprise when the error had cleared. Then I remembered that I had a code error that caused the UI problem; I identified and corrected it, but the error persisted. It took a restart of Xcode to clear it, so I guess the error was stuck. Sorry if I wasted any of your time reviewing this, and thanks for being here.
In Python, exceptions are generally intended for exceptional situations, not as a regular control flow mechanism. Here's why using exceptions as the expected outcome can be considered unpythonic:
More Pythonic (using a conditional statement):

def divide(a, b):
    if b == 0:
        return 0
    return a / b
In summary: While exceptions can be used in limited cases for control flow, they should generally be reserved for truly exceptional situations. Prioritize clear, concise, and efficient code using Python's built-in control flow mechanisms.
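For contrast, here is a short sketch of both styles ("look before you leap" vs. "easier to ask forgiveness"); returning 0 on division by zero is just this thread's example convention:

```python
def divide_lbyl(a, b):
    # LBYL: check the condition up front with ordinary control flow
    if b == 0:
        return 0
    return a / b

def divide_eafp(a, b):
    # EAFP: reserve the exception for the exceptional case
    try:
        return a / b
    except ZeroDivisionError:
        return 0
```

Both return the same results; the difference is readability and intent, and EAFP is most defensible when the failing case is genuinely rare.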
This turned out to be a Proxmox issue, not an Ansible one.
Proxmox LXC configuration files accept both <key>: <value> and <key>=<value>. However, it automatically converts the latter into the former. This is why lineinfile was not working.
I am still slightly unsure why blockinfile was producing marker lines at the top of the file and placing the block at the bottom, but lineinfile is working as expected using the <key>: <value> format.
It seems you want to "edit" the apalike style by replacing the "," with the "/". Have a look at this question (but you need to be comfortable with LaTeX...)
I have the same problem here: java.io.UncheckedIOException: java.io.IOException: Failed to create a new filesystem for /opt/keycloak/lib/lib/deployment/org.fusesource.jansi.jansi-1.18.jar. Did you find a solution to this?
First, create an Intent in the function that should bring you to the second activity. This code goes in your first activity, before opening the second activity, for example in a Button's onClick event:
Intent intent = new Intent(this, DashboardActivity2.class);
DashboardActivity2.class is the second activity that will open. Then you need to create an ArrayList to store all your items. The data could come from other sources, such as a database; in this case we create an ArrayList and put in the values manually.
ArrayList<String> cars = new ArrayList<String>();
cars.add(0,"I"); //Item "I" will be at position 0
cars.add(1,"'");
cars.add(2,"m");
cars.add(3,"o");
cars.add(4,"s"); //Here the element N°4 will be "s"
cars.add(5,"l");
cars.add(6,"a");
cars.add(7,"r");
cars.add(8,"C");
cars.add(9," ");
Now put the list into the Intent you created. (I named the Intent "intent", with a lowercase "i", but you can use any name you wish.)
intent.putExtra("list",cars);
We give our list the key "list": putExtra("KEY", OBJECT), where the Object is our list.
startActivity(intent);
Then we start the new activity by calling startActivity(intent).
In our second activity, we retrieve this data:
protected void onCreate(Bundle savedInstanceState){
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_dashboard2);
We have a button to receive the text; this Button was created in the XML layout file connected to the second activity:
Button button2 =findViewById(R.id.button2);
We create a new Intent (I used the same name as in the first activity, but that's not mandatory) and assign it the Intent that started this activity via getIntent(). An activity receives a single starting Intent, and you can send all your data from one activity to another through it.
Intent intent=getIntent();
Then I create a new ArrayList, again named "cars" as in the first activity (not mandatory):
ArrayList<String> cars = new ArrayList<>();
Now we load the data from the first activity into the ArrayList "cars" with getStringArrayList, using the key "list" that we set on the Intent:
cars= intent.getExtras().getStringArrayList("list");
Then we set the button text by getting items from the list by their position:
button2.setText(cars.get(0).toString()+
cars.get(1).toString()+
cars.get(2).toString()+
cars.get(9).toString()+
cars.get(8).toString()+
cars.get(6).toString()+
cars.get(7).toString()+
cars.get(5).toString()+
cars.get(3).toString()+
cars.get(4).toString()
);
Result when the second activity opens: the button reads "I'm Carlos".