The solution for this question is a custom project I made that makes it possible to sanitize data in the logging.
See
- https://github.com/StefH/SanitizedHttpLogger
- https://www.nuget.org/packages/SanitizedHttpClientLogger
- https://www.nuget.org/packages/SanitizedHttpLogger
And see this blogpost for more explanation and details:
- https://mstack.nl/blogs/sanitize-http-logging/
Has this issue been resolved? I'm having the same problem.
So, the solution I arrived at was to use reticulate.
If someone has a pure R solution that follows a similar pattern, I would still be interested in hearing it and changing the accepted solution.
reticulate::py_require("polars[database]")
reticulate::py_require("sqlalchemy")
polars <- reticulate::import("polars")
sqlalchemy <- reticulate::import("sqlalchemy")
engine <- sqlalchemy$create_engine("sqlite:///transactions.sqlite3", future = TRUE)
dataframe <- polars$DataFrame(data.frame(x = 1:5, y = letters[1:5]))
with(
  engine$begin() %as% conn,
  {
    dataframe$write_database("table_a", conn, if_table_exists = "append")
    dataframe$write_database("table_b", conn, if_table_exists = "append")
    dataframe$write_database("table_c", conn, if_table_exists = "append")
    stop("OOPS :(")
  }
)
Note: there was a bug in with() which the maintainers were kind enough to fix within a day, and this now works (i.e. the whole transaction is rolled-back upon error) with the latest branch.
A line with a - in front of it will not make it to the new file.
A line with a + in front of it is not in the old file.
A line with no sign is in both files.
Ignore the wording:
If you want a - line to make it to the new file, delete the - but carefully leave an empty space in its place.
If you want a + line to not make it to the new file – just delete the line.
What could be simpler?
Don't forget to change the two pairs of numbers at the top so that, for each pair, the number to the right of the comma is exactly equal to the number of lines in the hunk for its respective file, or else the edit will be rejected. That was too much of a mouthful so they didn't bother explaining it.
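For illustration, here is a made-up hunk. The pair after the minus counts the hunk's lines in the old file (context plus removed lines), and the pair after the plus counts its lines in the new file (context plus added lines):

@@ -10,3 +10,4 @@
 unchanged line
-line removed from the old file
+line added in the new file
+another added line
 another unchanged line

Old-file count: 2 context + 1 removed = 3. New-file count: 2 context + 2 added = 4.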
if I have 2 (or more - range loop generated) buttons calling the same callback, how do I know which one fired the event? How do I attach any data to the event?
Just by looking at your screenshot, the chances are high that you are using some CSS transform property on the component, which leads to a scaling "bug", as transform is more suited to graphics than to layout.
for example:
transform: translateY(max(-50%, -50vh));
Try using flex layout instead.
You could turn the reference to Document into a one-to-one instead of a foreign key; that way you would have the option to set the cascadeDelete parameter to true.
If you are not allowed to alter the data model and drop the database you would need to create an upgrade trigger.
Gotta love Multi platform tools that don't follow platform standards. C:\ProgramData, although not quite kosher, works just fine.
I came across this looking for a way to skip a non-picklable attribute, and based on JacobP's answer I'm using the code below. It keeps the same reference to skipped as the original instance.
import copy  # needed for copy.deepcopy

# Define this method on the class whose instances you are deep-copying
def __deepcopy__(self, memo):
    cls = self.__class__
    obj = cls.__new__(cls)
    memo[id(self)] = obj
    for k, v in self.__dict__.items():
        # deep-copy everything except 'skipped', which keeps the original reference
        if k not in ['skipped']:
            v = copy.deepcopy(v, memo)
        setattr(obj, k, v)
    return obj
Hooks in CRM software are automation triggers that allow you to connect your CRM with other applications or internal workflows. They save time, reduce manual work, and ensure smooth data flow across systems. Here’s how you can add hooks into a CRM:
Identify Key Events
Decide which events should trigger a hook, such as:
When a new lead is created
When a deal is closed
When an invoice is generated
When an employee’s attendance is marked
Use Webhooks or APIs
Most modern CRMs provide webhook or API integrations. A webhook pushes data to another application when a defined event occurs.
Example: If a new lead is added in the CRM, a webhook can automatically send that lead's details to your email marketing tool (a minimal sketch follows these steps).
Configure the Destination App
Decide where the data should go. Hooks can integrate your CRM with:
Email automation tools
Accounting software
HR or payroll systems
Inventory management solutions
Test the Workflow
Automate & Scale
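As a rough illustration of the webhook idea from step 2, here is a minimal receiver sketch in Python with Flask; the endpoint path and payload fields are made up, and forwarding to the destination app is only hinted at.

from flask import Flask, request

app = Flask(__name__)

# Hypothetical endpoint the CRM is configured to call when a new lead is created
@app.route("/hooks/lead-created", methods=["POST"])
def lead_created():
    lead = request.get_json()  # payload posted by the CRM webhook
    # Forward the relevant fields to the destination app (e.g. an email marketing tool)
    print("New lead:", lead.get("name"), lead.get("email"))
    return {"status": "received"}, 200

if __name__ == "__main__":
    app.run(port=5000)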
By choosing a flexible platform like SYSBI Unified CRM, businesses can easily add hooks, streamline processes, and connect multiple operations without relying on separate tools.
Actually, these 3 input boxes are like parameters for vcvarsall.bat.
So there's a hacky workaround: specify the versions in any input box, as long as vcvarsall.bat recognizes them:
Well, looks like we had to copy over some more code from staging to live.
Then it worked. But the error is not very clear about what the problem is...
In the project file, add:
<PropertyGroup>
<EnableDefaultContentItems>false</EnableDefaultContentItems>
</PropertyGroup>
This forbids the SDK from adding Content files automatically, and keeps only the ones you explicitly declare with <Content Include="..." />.
I eventually found a solution.
I think it's not clean but it works.
It uses the "Installing the SageMath Jupyter Kernel and Extensions" procedure.
venv/bin/python
>>> from sage.all import *
>>> from sage.repl.ipython_kernel.install import SageKernelSpec
>>> prefix = tmp_dir()
>>> spec = SageKernelSpec(prefix=prefix)
>>> spec.kernel_spec()
I fixed each error with a symbolic link:
sudo ln -s /usr/lib/python3.13/site-packages/sage venv/lib/python3.13/site-packages/
sudo ln -s /usr/lib/python3.13/site-packages/cysignals venv/lib/python3.13/site-packages/
sudo ln -s /usr/lib/python3.13/site-packages/gmpy2 venv/lib/python3.13/site-packages/
sudo ln -s /usr/lib/python3.13/site-packages/cypari2 venv/lib/python3.13/site-packages/
sudo ln -s /usr/lib/python3.13/site-packages/memory_allocator venv/lib/python3.13/site-packages/
And finally,
>>> spec.kernel_spec()
{'argv': ['venv/bin/sage', '--python', '-m', 'sage.repl.ipython_kernel', '-f', '{connection_file}'], 'display_name': 'SageMath 10.7', 'language': 'sage'}
I put this output in
/usr/share/jupyter/kernels/sagemath/kernel.json.in
And it works.
Original poster of the question here.
The reason why the ComboBox wasn't showing any items was that I had overlooked the DataGridView's ReadOnly property and left it set to True.
After changing it to False, the ComboBox worked perfectly.

Here's the code:
DataGridViewComboBoxColumn column = new DataGridViewComboBoxColumn();
column.Items.Add("実案件");
column.Items.Add("参考見積り");
column.DataPropertyName = dataGridView_検索.Columns["見積もり日区分"].DataPropertyName;
dataGridView_検索.Columns.Insert(dataGridView_検索.Columns["見積もり日区分"].Index, column);
dataGridView_検索.Columns.Remove("見積もり日区分");
column.Name = "見積もり日区分";
column.HeaderText = "見積もり日区分";
column.FlatStyle = FlatStyle.Flat;
column.DisplayStyle = DataGridViewComboBoxDisplayStyle.ComboBox;
column.DefaultCellStyle.BackColor = Color.FromArgb(255, 255, 192);
column.MinimumWidth = 150;
When a path parameter is present and contains a very long path, the API often ignores the visible parameter, then adjusts the map's center so that the entire path is still visible.
Considering that you only want to show a specific segment, the most reliable workaround would be to use the center and zoom parameters:
zoom=18&center=51.47830481493033,5.625173621802276&key=XXX
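For reference, a complete request would look something like this, keeping your existing path parameter as-is; size is required by the Static Maps API, and the value below is just an example:

https://maps.googleapis.com/maps/api/staticmap?size=640x400&zoom=18&center=51.47830481493033,5.625173621802276&path=...&key=XXX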
Issue resolved by simply following this [YouTube video](https://www.youtube.com/watch?v=QuN63BRRhAM); it's officially from Expo.
- see my current package.json
{
"name": "xyz",
"version": "1.0.0",
"scripts": {
"start": "expo start --dev-client",
"android": "expo run:android",
"ios": "expo run:ios",
"web": "expo start --web"
},
"dependencies": {
"@expo/vector-icons": "^15.0.2",
"@react-native-async-storage/async-storage": "2.2.0",
"@react-native-community/datetimepicker": "8.4.4",
"@react-native-community/netinfo": "^11.4.1",
"@react-navigation/native": "^6.1.18",
"@react-navigation/stack": "^6.3.20",
"@supersami/rn-foreground-service": "^2.2.1",
"base-64": "^1.0.0",
"date-fns": "^3.6.0",
"expo": "^54.0.10",
"expo-background-fetch": "~14.0.6",
"expo-build-properties": "~1.0.7",
"expo-calendar": "~15.0.6",
"expo-camera": "~17.0.7",
"expo-dev-client": "~6.0.11",
"expo-font": "~14.0.7",
"expo-gradle-ext-vars": "^0.1.2",
"expo-image-manipulator": "~14.0.7",
"expo-image-picker": "~17.0.7",
"expo-linear-gradient": "~15.0.6",
"expo-location": "~19.0.6",
"expo-media-library": "~18.2.0",
"expo-sharing": "~14.0.7",
"expo-status-bar": "~3.0.7",
"expo-task-manager": "~14.0.6",
"expo-updates": "~29.0.9",
"framer-motion": "^11.5.4",
"jwt-decode": "^4.0.0",
"react": "19.1.0",
"react-dom": "19.1.0",
"react-native": "0.81.4",
"react-native-background-fetch": "^4.2.7",
"react-native-background-geolocation": "^4.18.4",
"react-native-calendars": "^1.1306.0",
"react-native-gesture-handler": "~2.28.0",
"react-native-jwt": "^1.0.0",
"react-native-linear-gradient": "^2.8.3",
"react-native-modal-datetime-picker": "^18.0.0",
"react-native-month-picker": "^1.0.1",
"react-native-reanimated": "~4.1.1",
"react-native-reanimated-carousel": "^4.0.3",
"react-native-safe-area-context": "~5.6.0",
"react-native-screens": "~4.16.0",
"react-native-vector-icons": "^10.1.0",
"react-native-view-shot": "~4.0.3",
"react-native-webview": "13.15.0",
"react-native-worklets": "0.5.1",
"react-swipeable": "^7.0.1",
"rn-fetch-blob": "^0.12.0"
},
"devDependencies": {
"@babel/core": "^7.20.0",
"@babel/plugin-transform-private-methods": "^7.24.7",
"local-ip-url": "^1.0.10",
"rn-nodeify": "^10.3.0"
},
"resolutions": {
"react-native-safe-area-context": "5.6.1"
},
"private": true,
"expo": {
"doctor": {
"reactNativeDirectoryCheck": {
"exclude": [
"@supersami/rn-foreground-service",
"rn-fetch-blob",
"base-64",
"expo-gradle-ext-vars",
"framer-motion",
"react-native-jwt",
"react-native-month-picker",
"react-native-vector-icons",
"react-swipeable"
]
}
}
}
}
Just in case someone comes to this page for the same reason as I did: I migrated an application to Java 17, but my services on Ignite are still on Java 11 for some reason. Calling that service throws the exception "Ignite failed to process request [142]: Failed to deserialize object [typeId=-1688195747]".
The reason was that I was using the stream method toList() in my Java 17 app and calling a service on Ignite with an argument that contains such a List. Replacing it with collect(Collectors.toList()) solved the issue.
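For illustration, the change was essentially the one below. Stream.toList() returns an unmodifiable, JDK-internal list type, while Collectors.toList() currently yields a plain ArrayList, which the Java 11 side can deserialize (the values are made up):

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ToListExample {
    public static void main(String[] args) {
        // Java 16+: returns an unmodifiable list type internal to the JDK
        List<String> modern = Stream.of("a", "b").toList();

        // Portable replacement: collects into a plain mutable list (an ArrayList in practice)
        List<String> portable = Stream.of("a", "b").collect(Collectors.toList());

        System.out.println(modern.getClass());   // some internal unmodifiable implementation
        System.out.println(portable.getClass()); // class java.util.ArrayList
    }
}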
No, the total size of your database will have a negligible impact on the performance of your queries for recent data, thanks to ClickHouse's design.
Your setup is excellent for this type of query, and performance should remain fast even as the table grows.
Linear Regression is a good starting point for predicting medical insurance costs. The idea is to model charges as a function of features like age, BMI, number of children, smoking habits, and region.
Steps usually include:
Prepare the data – encode categorical variables (like sex, smoker, region) into numerical values.
Split the data – use train-test split to evaluate the model’s performance.
Train the model – fit Linear Regression on training data.
Evaluate – use metrics like Mean Squared Error (MSE) and R² score to check accuracy.
Predict – use the model to estimate charges for new individuals based on their features.
Keep in mind: Linear Regression works well if the relationship is mostly linear. For more complex patterns, Polynomial Regression or Random Forest can improve predictions.
If you want, I can also share a Python example with dataset and code for better understanding.
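A minimal sketch with scikit-learn, assuming the usual insurance dataset columns (age, sex, bmi, children, smoker, region, charges) and that the file is named insurance.csv:

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("insurance.csv")  # adjust the path to your dataset

# Encode categorical variables (sex, smoker, region) as dummy columns
X = pd.get_dummies(df.drop(columns=["charges"]), drop_first=True)
y = df["charges"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))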
It's typically safe without any guarantee.
As mentioned in @axe's answer.
It's fine if a particular string implementation stores its data as a contiguous character array, but that is not a guarantee from the standard.
Just so the info is here.
Instead of arecord and aplay,
you should use tinycap with tinyalsa on Android, from what I remember.
Unexpected Git conflicts occur when multiple people make changes to the same lines of a file or when merging branches with overlapping edits. Git can’t automatically decide which change to keep, so manual resolution is needed.
I guess you need to use double curly braces in your prompt to avoid string-interpolation errors. I know the error message doesn't seem to be related to that.
Instead of {a: b} -> {{a: b}}
Azure DevSecOps brings security into every stage of DevOps using a mix of Azure-native and third-party tools:
Code & CI/CD – Azure Repos (secure code management), Azure Pipelines/GitHub Actions (automated build & deploy with security gates).
Security & Compliance – Microsoft Defender for Cloud (threat protection), Azure Policy (enforce standards), Azure Key Vault (secure secrets).
Testing & Vulnerability Scanning – SonarQube, Snyk, OWASP ZAP for code quality and dependency checks.
Monitoring & Response – Azure Monitor & Log Analytics (observability), Microsoft Sentinel (SIEM/SOAR for threat detection & response).
👉 At Cloudairy, we design DevSecOps pipelines that integrate these tools to keep code, infrastructure, and operations secure, compliant, and automated.
Look! This might be more helpful:
Along with all other Azure products, Cognitive Services is part of the official collection of Azure architecture symbols that Microsoft provides. It is advised to use these icons in solution and architectural diagrams.
Get Azure Architecture Icons here.
Formats: SVG, PNG, and Visio stencils that work with programs like Lucidchart, Draw.io, PowerPoint, and Visio.
Service categories are used to arrange the icons. Cognitive Services is located in the AI + Machine Learning category.
Microsoft updates and maintains these icons to make sure they match the Azure logo.
Your architecture diagrams will adhere to Microsoft's design guidelines and maintain their visual coherence if you use these official icons.
You can try to clean the Gradle caches to force a fresh download:
flutter clean
rm -rf ~/.gradle/wrapper/dists ~/.gradle/caches android/.gradle
flutter pub get
and then check the wrapper URL in android/gradle/wrapper/gradle-wrapper.properties:
distributionUrl=https\://services.gradle.org/distributions/gradle-8.7-bin.zip
retry:
flutter run -v
You can also implement it yourself in a Spring Boot 2 application using Spring’s ApplicationEvent and Transaction Synchronization.
You can follow the steps below (a code sketch follows them):
- Create an outbox table with columns for a unique ID, event type, payload, and timestamp to persist events.
- Use a single database transaction to save both business data and the corresponding event to the outbox table.
- Implement a scheduled job to poll the outbox table, send unsent events to their destination, and then mark them as sent or delete them.
- Design event consumers to be idempotent, ensuring they can safely process duplicate messages without side effects.
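A rough sketch of the first three steps in a Spring Boot 2 application; the entity, repository, and publisher names are hypothetical, and getters/setters, batching, and error handling are omitted:

import java.time.Instant;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Entity
class OutboxEvent {
    @Id @GeneratedValue Long id;
    String eventType;
    @Lob String payload;
    Instant createdAt = Instant.now();
    boolean sent;
}

interface OutboxRepository extends JpaRepository<OutboxEvent, Long> {
    List<OutboxEvent> findTop100BySentFalseOrderByCreatedAtAsc();
}

@Service
class OrderService {
    private final OrderRepository orders;   // hypothetical business repository
    private final OutboxRepository outbox;

    OrderService(OrderRepository orders, OutboxRepository outbox) {
        this.orders = orders;
        this.outbox = outbox;
    }

    @Transactional  // business row and outbox row commit or roll back together
    public void placeOrder(Order order, String payloadJson) {
        orders.save(order);
        OutboxEvent evt = new OutboxEvent();
        evt.eventType = "ORDER_PLACED";
        evt.payload = payloadJson;
        outbox.save(evt);
    }
}

@Component
class OutboxPoller {
    private final OutboxRepository outbox;
    private final MessagePublisher publisher;  // hypothetical wrapper around your broker client

    OutboxPoller(OutboxRepository outbox, MessagePublisher publisher) {
        this.outbox = outbox;
        this.publisher = publisher;
    }

    @Scheduled(fixedDelay = 5000)  // requires @EnableScheduling on a configuration class
    @Transactional
    public void relay() {
        for (OutboxEvent evt : outbox.findTop100BySentFalseOrderByCreatedAtAsc()) {
            publisher.publish(evt.eventType, evt.payload);  // consumer must be idempotent (last step)
            evt.sent = true;  // or delete the row instead of flagging it
        }
    }
}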
Mine was solved because I had Platforms in my csproj:
<Platforms>x64;x86</Platforms>
I had to remove it for it to start building correctly.
To retrieve the SAP data, you need to create a SAP OData Glue connector first.
Follow this guide to create the Glue connector: https://catalog.us-east-1.prod.workshops.aws/workshops/541dd428-e64a-41da-a9f9-39a7b3ffec17/en-US/lab05-glue-sap
Test the connector to make sure the connection and authentication succeed.
Then you need to create a Glue ETL job to read the SAP OData entity and write to S3.
(Give the Glue job's IAM role the proper privileges, like S3 read/write access...)
You can refer to this ETL code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
# Script for node SAP OData - using correct 'entity' parameter
SAPOData_node = glueContext.create_dynamic_frame.from_options(
connection_type="sapodata",
connection_options={
"connectionName": "Your Sapodata connection",
"ENTITY_NAME": "/sap/opu/odata/sap/API_PRODUCT_SRV/Sample_Product" # Your SAP Odata entity
},
transformation_ctx="SAPOData_node"
)
# Write to S3 destination
output_path = "s3://your-sap-s3-bucket-name/sap-products/"
glueContext.write_dynamic_frame.from_options(
frame=SAPOData_node,
connection_type="s3",
connection_options={
"path": output_path,
"partitionKeys": [] # Add partition keys if needed, e.g., ["ProductType"]
},
format="parquet",
transformation_ctx="S3Output_node"
)
job.commit()
Run the ETL job
This solved my problem: I added a pyproject.toml file along with setup.py.
Content of pyproject.toml:
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
It generated a .whl file only for that specific package.
The cause is unknown, but the dump file shows filesystem::path::~path being freed before it was initialized. It's a bug in Clang 20.1 that has been fixed in Clang 21+.
It could be a bug related to compiler reordering.
Simply convert the results to a Collection:
use Illuminate\Support\Collection; // import not strictly required when using the collect() helper
$acc = DB::select('select id,name from accounts limit 5');
return collect($acc);
It's not UB.
As long as you know what you're doing, it's OK to use anything that compiles; that's how unsafe works.
If the UnsafeCell is written at the very beginning of the &T read, it's UB. If that never happens, then it's safe to use.
I would like to express my sincere gratitude to @ Christoph Rackwitz for his suggestion. By visiting the website he shared, I obtained useful information. Given that there are very few online tutorials mentioning the use of Nvidia GeForce RTX 50 series graphics cards for compiling Cuda and OpenCV library files at the current date, I am sharing my successful compilation experience here.
The version numbers of the various software and drivers I use are as follows:
OS : Windows 11
Cmake:3.28.0
Nvidia Cuda Version : 13.0
Cuda Toolkit:cuda 12.9
cudnn:9.13
Visual Studio:Microsoft Visual Studio Professional 2022 (x64)- LTSC 17.6,Version:17.6.22
OpenCV/OpenCV-contrib: 4.13.0-dev. Make sure to download the latest repository files from OpenCV's GitHub; the OpenCV 4.12 source code cannot fully support this Nvidia CUDA Toolkit and will cause many problems.
Python Interpreter:Python 3.13.5, I installed a standalone Python interpreter specifically for compiling the OpenCV library files used in Python programming.
CMake flags:
1. Check WITH_CUDA, OPENCV_DNN_CUDA, and OPENCV_DNN_OPENVINO (or OPENCV_DNN_OPENCL/OPENCV_DNN_TFLITE) individually, and do not check BUILD_opencv_world. Set OPENCV_EXTRA_MODULES_PATH, for example: D:/SoftWare/OpenCV_Cuda/opencv_contrib-4.x/modules;
2. Set CUDA_ARCH_BIN and the NVIDIA PTX architectures (CUDA_ARCH_PTX) to 12.0, and check WITH_CUDNN;
3. Check OPENCV_ENABLE_NONFREE. If you want to build the OpenCV library used for Python programming, numpy must be installed in the installation path of that Python interpreter, and you also need to set the following paths, for example:
PYTHON3_EXECUTABLE: D:/SoftWare/Python313/python.exe
PYTHON3_INCLUDE_DIR: D:/SoftWare/Python313/include
PYTHON3_LIBRARY: D:/SoftWare/Python313/libs/python310.lib
PYTHON3_NUMPY_INCLUDE_DIRS: D:/SoftWare/Python313/Lib/site-packages/numpy/_core/include
PYTHON3_PACKAGES_PATH: D:/SoftWare/Python313/Lib/site-packages
4. Then check BUILD_opencv_python3 and ENABLE_FAST_MATH (an equivalent command line is sketched after this list).
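For reference, roughly the same configuration expressed as a CMake command line (the generator, paths, and source directory below are just the examples from this answer; adjust them to your own setup and add whichever DNN backend options you checked):

cmake -G "Visual Studio 17 2022" -A x64 ^
  -D OPENCV_EXTRA_MODULES_PATH=D:/SoftWare/OpenCV_Cuda/opencv_contrib-4.x/modules ^
  -D WITH_CUDA=ON -D OPENCV_DNN_CUDA=ON -D WITH_CUDNN=ON ^
  -D CUDA_ARCH_BIN=12.0 -D CUDA_ARCH_PTX=12.0 ^
  -D BUILD_opencv_world=OFF -D OPENCV_ENABLE_NONFREE=ON ^
  -D BUILD_opencv_python3=ON -D ENABLE_FAST_MATH=ON ^
  -D PYTHON3_EXECUTABLE=D:/SoftWare/Python313/python.exe ^
  D:/SoftWare/OpenCV_Cuda/opencv-4.x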
After the configuration is completed, use CMake's "Generate" function to create OpenCV.sln. Open OpenCV.sln with Visual Studio and complete the final compilation by building the ALL_BUILD and INSTALL targets. As long as Visual Studio reports no errors, the OpenCV library files have been compiled successfully.
has this been fixed? I am facing the same issue and not sure what is wrong.
You should use "Union All" when you are combining two sources with the same number of columns and columns of a similar nature, i.e. you want all the records from both sources.
I couldn't find a way to input an empty string through the Airflow UI either. My workaround is to input a space and strip the param in code.
Native Image has the restriction that "Properties that change if a bean is created are not supported (for example, @ConditionalOnProperty and .enabled properties)" due to the closed-world assumption.
You forgot to double quote the string:
set(compileOptions "-Wall -Wextra -O3 -Wno-narrowing -ffast-math -march=native")
So, what ends up happening is that compileOptions is set to "-Wall," only, and the other tokens, such as "-Wextra", "-O3", etc, are parsed as options to the set command.
You need to reference $PARAMETER1 instead of $@ in the inline script command. These parameters are at the ARM level; they will not be passed as arguments to the script.
Use the distinct function, like this:
distinct (column_name)
Your code is correct. You are getting an error because of a known bug in Playground.
Please consider using a finally block to reset your implicit wait to the default value; it's less error-prone and avoids code duplication.
I fixed this by not using sudo for my command.
In order for LIME to work correctly and effectively, it requires probability scores rather than simple class predictions.
The current setup uses rf.predict, which produces 0/1 labels. For LIME to receive a full probability distribution, use rf.predict_proba; this lets it properly explain the predictions.
To solve this, pass rf.predict_proba when calling explainer.explain_instance. This adjustment gives LIME access to the probability scores necessary for its analysis.
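A minimal sketch of that change, assuming a fitted RandomForestClassifier rf, a training DataFrame X_train, and a single row x to explain:

import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Build the explainer from the training data (class names here are just placeholders)
explainer = LimeTabularExplainer(
    training_data=np.array(X_train),
    feature_names=list(X_train.columns),
    class_names=["0", "1"],
    mode="classification",
)

# Pass predict_proba (probabilities), not predict (hard 0/1 labels)
exp = explainer.explain_instance(np.array(x), rf.predict_proba, num_features=10)
print(exp.as_list())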
After upgrading from "expo": "54.0.7", to "expo": "54.0.8", I was finally able to run eas build -p ios successfully today.
Sir help me code: Sarangheo Autotype javascript..
The solution for me was to switch from path.toShapes(true) to SVGLoader.createShapes(path) when using ExtrudeGeometry for the shapes.
The issue was ultimately the workflow steps and not getting all the session keys properly set. When clicking on Sign In Button, you go to https://auth.pff.com. I tried going directly to https://auth.pff.com. However, when adjusting and going to https://premium.pff.com and clicking on "sign in" button, everything populated correctly. For some reason the Session key for "loggedIn" was not getting set to True otherwise.
I did have to add 1-2 second sleep as well to make sure the Captcha Loaded... no interaction with it, but you just had to let it load.
You can find a step-by-step explanation of the Aho-Corasick algorithm, and try it with custom input, here.
You could do this with randcraft
from randcraft.constructors import make_discrete
bernoulli = make_discrete(values=[0, 1], probabilities=[0.8, 0.2])
bernoulli_100 = bernoulli.multi_sample(100)
bernoulli_100.plot()
results = bernoulli_100.sample_numpy(5)
print(results)
# [10. 15. 20. 14. 24.]
Where did you get the Bluetooth SDK for the ACR1255U-J1 from? Mine came only with a Java SDK, which won't work for Android.
I found the answer to this.
Had to allow this permission for the EKS node IAM role
ecr:BatchImportUpstreamImage
Try installing Rosetta via softwareupdate --install-rosetta. I had the same issue, and when running xcrun simctl list runtimes -v I saw it mentioned a lack of Rosetta.
I have been facing the same issue that you have described.
After updating the library "com.google.android.gms:play-services-ads" to version "24.6.0" it got solved.
This version was released on September 9th and it is the latest.
I hope it works for you too!
https://mvnrepository.com/artifact/com.google.android.gms/play-services-ads/24.6.0
If the problem is located in a third-party gem instead of your own code, then it might be easier to use Andi Idogawa's file_exists gem, at least temporarily (explanatory blog post).
bundle add file_exists
Then add to e.g. config/boot.rb:
require 'file_exists'
Using an ontology to guide the tool sounds smart; it's like checking everything carefully to make sure it works as expected.
In case you're struggling with Calendly and only need an API, check out Recal: https://github.com/recal-dev. We also open-sourced our scheduling SDK and are integrating a Calendly wrapper API right now. If you want early access, just shoot me a message: [email protected]
Did you manage to run it?
I have a similar problem with the H747.
mkdir /tmp/podman-run_old
mv -v /tmp/podman-run-* /tmp/podman-run_old/
# start all dead containers
podman start $(podman ps -qa)
I would turn to window functions and perhaps a common table expression such as:
with cte as (
    select id, multiplier,
           row_number() over (partition by multiplier, id) as lag_multiplier
    from table
)
update table
set id = concat(cast(table.id as char), cast(cte.lag_multiplier as char))
from cte
where table.id = cte.id
  and table.multiplier != 0;
/*Note that I don't work with UPDATE much, and haven't tested this query. So the syntax might be off. It's also a little expensive. I'm not sure if that can be improved. Best of luck.*/
Have you solved this problem? I think I have a similar issue. BR, Joachim
This is definitely feasible, but we would need to look at your webhook listener code.
From the Docusign part, please refer to this documentation on how to use and setup Connect notifications.
https://developers.docusign.com/platform/webhooks/
Thank you.
I popped by here when researching the 255 Transpose Limit, as I expect others have done and will do. I got a bit thrown off course, but finally straightened it out in my head, and so thought I could make a worthwhile contribution for others passing by in the future.
There are two issues here, which may not be immediately obvious.
_1) The Transpose function does not like it if it is working on a Variant element type array, where one or more of the array elements is a string of more than 255 characters.
If we are dealing with 1-dimensional arrays, as in the original question, then there is a way to get over this without looping, while still using the Transpose function: use the Join function on the Variant array (with an arbitrary separator), then use the Split function on the result. We then end up with a String array, and Transpose is happy with elements of more than 255 characters.
This next demo coding almost gets what was wanted here, and variations of it may be sufficient for some people having an issue with the 255 Transpose Limit.
Sub RetVariantArrayToRange() '
    Let ActiveSheet.Range("M2:M5") = TransposeStringsOver255()
End Sub

Function TransposeStringsOver255()
    Dim myArray(3) As Variant 'this the variant array I will attempt to write
    ' Here I fill each element with more than 255 characters
    myArray(0) = String(300, "a")
    myArray(1) = String(300, "b")
    myArray(2) = String(300, "c")
    myArray(3) = String(300, "d") '
    ' Let TransposeStringsOver255 = Application.Transpose(myArray()) ' Errors because Transpose does not work on a Variant type array if any element is a string greater than 255 characters
    Dim strTemp As String, myArrayStr() As String
    Let strTemp = Join(myArray(), "|")
    Let myArrayStr() = Split(strTemp, "|")
    Let TransposeStringsOver255 = Application.Transpose(myArrayStr())
End Function
_2) That last coding does not do exactly what was wanted. The specific requirement was along these lines (if using the function above):
…..select an area of 4 rows x 1 column and type "=TransposeStringsOver255()" into the formula bar (do not enter the quotes). and hit (control + shift + enter)…..
That last coding does not work to do exactly that.
As Tim Williams pointed out, the final array seems to need to be a String array (even if being held in a Variant variable ). Why that should be is a mystery, since the demo coding above seems to work as a workaround to Transpose Strings Over 255 in a Variant Array To a Range.
To get over the problem, we loop the array elements into a String array. Then the mysterious problem goes away.
This next coding would be the last coding with that additional bit
Function TransposeStringsOver255VariantArrayToSelectedRange()
    Dim myArray(3) As Variant 'this the variant array I will attempt to write
    ' Here I fill each element with more than 255 characters
    myArray(0) = String(300, "a")
    myArray(1) = String(300, "b")
    myArray(2) = String(300, "c")
    myArray(3) = String(300, "d") ' -
    ' Let TransposeStringsOver255VariantArrayToSelectedRange = Application.Transpose(myArray()) ' Errors because Transpose does not work on a Variant type array if any element is a string greater than 255 characters
    Dim strTemp As String, myArrayStr() As String
    Let strTemp = Join(myArray(), "|")
    Let myArrayStr() = Split(strTemp, "|")
    ' Let TransposeStringsOver255VariantArrayToSelectedRange = Application.Transpose(myArrayStr()) ' Errors because "Seems like you need to return a string array" Tim Williams: https://stackoverflow.com/a/35399740/4031841
    Dim VarRet() As Variant
    Let VarRet() = Application.Transpose(myArrayStr())
    Dim strRet() As String, Rw As Long
    ReDim strRet(1 To UBound(VarRet(), 1), 1 To 1)
    For Rw = 1 To UBound(VarRet(), 1)
        Let strRet(Rw, 1) = VarRet(Rw, 1)
    Next Rw
    Let TransposeStringsOver255VariantArrayToSelectedRange = strRet()
End Function
To compare in the watch window:
The first coding ends up getting this array, which in many situations will get the job done for you
https://i.postimg.cc/fWYQvsTy/c-Transpose-Strings-Over255.jpg
But for the exact requirement of this Thread, we need what the second coding gives us, which is this:
https://i.postimg.cc/FRL585yP/f-Transpose-Strings-Over255-Variant-Array-To-Selected-Range.jpg
_.______________________________________-
Since we are now having to loop through each element anyway, we might as well forget about the Transpose function and change the loop slightly to do the transpose at the same time.
Function TransposeStringsOver255VariantArrayToSelectedRange2()
    Dim myArray(3) As Variant 'this the variant array I will attempt to write
    ' Here I fill each element with more than 255 characters
    myArray(0) = String(300, "a")
    myArray(1) = String(300, "b")
    myArray(2) = String(300, "c")
    myArray(3) = String(300, "d") ' -
    Dim strRet() As String, Rw As Long
    ReDim strRet(1 To UBound(myArray()) + 1, 1 To 1)
    For Rw = 1 To UBound(myArray()) + 1
        Let strRet(Rw, 1) = myArray(Rw - 1)
    Next Rw
    Let TransposeStringsOver255VariantArrayToSelectedRange2 = strRet()
End Function
We have now arrived at a solution similar to that from Tim Williams.
(One thing that initially threw me off a bit was the second function from Tim Williams, as some smart people had told me that to get an array out of a function it must be declared as
Function MyFunc() As Variant
I had never seen a function declared like
Function MyFunc() As String()
before.)
Hoping this bit of clarification may help some people passing by, as I did.
Alan
Not an answer but an extension of the question.
Suppose I want to copy the contents of, say, File1 to a new File2 while only being able to have one file open at a time in SD.
It seems that I can open File1 and read into a buffer until, say, a line end, and then close File1, open File2 and write to File2. Close File2 and reopen File1.
Then I have a problem: having reopened File1, I need to read from where I had got to when I last closed it. Read the next chunk until, say, a line end, close File1, reopen File2 as append, and write to File2.
The append means that File2 gradually accumulates the information, so no problem there, but I am unclear how, in File1, I return to the last read location.
Do I need to loop through the file each time I open it, repeating the number of line-end reads previously done?
This thread is old, but I came across a similar issue.
I am trying to copy millions of files from one server to another over the network.
When I use the robocopy command without /mt, it seems to work fine. But when I add /mt, /mt:2, etc., it gets stuck on the same screen as above, with RAM usage increasing. I waited 20 minutes but nothing happened; it just copied the folders but not the files inside. This happens on Windows Server 2016.
Can anyone suggest something?
To target a specific file size (worked for jpeg), say 300kb:
convert input.jpg -define jpeg:extent=300kb output.jpg
Forces output file to be about 300 KB
It seems the issue was within Flutter's code and my IDE was trying to debug it.
My VS Code debugging configuration was set to "Debug my code + packages" so it was also trying to debug Flutter's code and that's why it would open up binding.dart because there was an error in that code.
Setting debugging config to just "Debug my code" should fix this problem!
You can do this from the bottom left in VS Code, just next to the error count and warning counts.
Edit: You can only change this when you're running a debug session. Launch a debug instance and the toggle to change this should appear in the bottom left corner.
Kafka is a stream, not a format.
df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("subscribe", "sparktest") \
.option("startingOffsets", "earliest") \
.load()
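The Kafka source then exposes binary key/value columns; assuming the payload is UTF-8 text, a typical next step is:

# Decode the Kafka message key and payload from bytes to strings
messages = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")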
It's Python 3.12 issue, try downgrading to 3.11
In your nuxt.config.ts do:
// https://nuxt.com/docs/api/configuration/nuxt-config
export default defineNuxtConfig({
  $production: {
    nitro: {
      preset: 'aws-amplify',
      awsAmplify: {
        runtime: "nodejs22.x"
      },
    },
  },
});
I know it is an old thread, but I still faced this issue on Windows and finally got a working solution after multiple attempts:
$OutputEncoding = [System.Text.Encoding]::UTF8
[System.Console]::OutputEncoding = [System.Text.Encoding]::UTF8
python script.py > output.txt
I once had to change my CER file from "UTF-16 LE BOM" to "UTF-8". I'm not sure how this applies to you directly, but that's basically the error I got from openssl when working with certificates with the wrong text encoding.
I also faced the same issue for many years and found nothing on the internet. But after a long time, I finally got a solution: a small but excellently working add-on, linked below.
It's very easy: just install the add-on, copy the data from Excel, go to the Thunderbird compose window, press CTRL + Q, and you are done.
No need for MS Word or any other word processor; your data will be pasted as-is with rich text formatting, colors included.
https://addons.thunderbird.net/en-US/thunderbird/addon/paste-excel-table-into-compose/
In 2025 I just renamed C:\project\.git\hooks\pre-commit.sample to pre-commit
#!/bin/sh
echo "🚀 Run tests..."
php artisan test
if [ $? -ne 0 ]; then
echo "❌ Test failed!"
exit 1
fi
echo "✅ Passed, pushing..."
exit 0
I believe this has something to do with virtualization but I don't fully understand what's going on, why is this and how do I fix it.
Virtualization is simple: If you have 10000 strings, the UI will only create however many ListViewItem controls are needed to fit the viewport.
When you set CanContentScroll to false, the ScrollViewer will "scroll in terms of physical units", according to the documentation. That means that all 10000 ListViewItems will be created, lagging the UI.
Is there a way to keep it False so it won't show an "empty line" at the end?
By keeping it false, you kill performance. If you want to get rid of the empty line at the bottom and eliminate the lag, you should override the ListView's ItemsPanel with a VirtualizingStackPanel configured to change this behavior.
<ListView ScrollViewer.CanContentScroll="True">
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<VirtualizingStackPanel ScrollUnit="Pixel"
IsVirtualizing="True"/>
</ItemsPanelTemplate>
</ListView.ItemsPanel>
</ListView>
ScrollUnit="Pixel" makes the ScrollUnit be measured in terms of pixels, which should eliminate the empty line at the bottom.
Same problem with blazor server. The nuget package BootstrapBlazor bundles the necessary bootstrap files in the staticwebassets folder so it should be properly deployed for blazor - and you can reference it as such:
<link href="_content/BootstrapBlazor/css/bootstrap.min.css" rel="stylesheet" />
I'm facing the same issue while upgrading my Node app to Node 18, using the Serverless component 3.6 and Next.js 14. I tried many ways but didn't find any solution.
This is such a non-issue, just get better.
public static bool IsNegative(this TimeSpan value) => value.Ticks < 0;
The yml:
- name: RUN PYTHON ON TARGET
  changed_when: false
  shell: python3 /.../try_python.py {{side_a}}
  become: true
  become_user: xxxx
  register: py_output
The script (adapted to AAP and tested locally):
# name = input()
with open("/.../try_txt.txt", "w") as file:
    file.write(f"{{$1}}")
The survey contains only the "side_a" variable, and it is working already for bash cases.
Since this question is a bit old and doesn't seem to have a clear answer, here is my proposed approach.
First, I would segment the large dataset into smaller, more manageable chunks based on a time window (for example, creating a separate DataFrame for each month). For each chunk, I would perform exploratory data analysis (EDA) to understand its distribution, using tools like histograms, Shapiro-Wilk/Kolmogorov-Smirnov tests for normality, and QQ-Plots.
In a real-world scenario with high-frequency data, such as a sensor recording at 100 Hz (i.e., one reading every 0.01 seconds), processing the entire dataset at once is impossible if you're working on a local machine. Therefore, I would take a representative sample of the data. I would conduct the EDA on this sample, then calculate the normalization parameters from it. These parameters would then be used as the basis to normalize the rest of the data for that period (e.g., the entire month).
By normalizing the data to a consistent range, such as [0,1], the different segments of data should become directly comparable.
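A minimal pandas sketch of that idea, with made-up file and column names: estimate min/max on a representative sample, then stream the rest in chunks and scale each one with the same parameters.

import pandas as pd

# 1) Estimate normalization parameters from a representative sample
sample = pd.read_csv("sensor.csv", usecols=["value"], nrows=1_000_000)
vmin, vmax = sample["value"].min(), sample["value"].max()

# 2) Process the full file chunk by chunk, scaling to [0, 1] with the sampled parameters
normalized_chunks = []
for chunk in pd.read_csv("sensor.csv", parse_dates=["timestamp"], chunksize=500_000):
    chunk["value_norm"] = (chunk["value"] - vmin) / (vmax - vmin)
    normalized_chunks.append(chunk)

result = pd.concat(normalized_chunks, ignore_index=True)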
The documentation is contradictory about the difference between the volatile keyword and VH.setVolatile.
I don't remember the chapter... but the one for VarHandle explicitly states that it resembles a fullFence... which means that at least both setVolatile and getVolatile are of seq_cst barrier strength.
Now, I have my doubts that the keyword version is as strong.
The reason they are so obtuse about it is that within chapter 17 they attempt to try to explain both... the lock monitor and the volatile read/writes as if they were similar.
Chapter 17 treats the concept of "Synchronization order" out of nowhere.
It doesn't explain WHAT enforces it or how it even works under the hood.
I know from experience that the keyword is a lock-queue... so its being "totally ordered" is not true for MCS/CLH lock-queues, which could very well work perfectly fine with both acquire and release semantics.
But anyways...
Chapter 17.4.3 makes a subtle distinction in my mind...
It states:
"A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)"
Notice the property "synchronization order" is not explicitly granted to the "write to a volatile variable v" action/subject.
This means that the "total order" property previously granted to the "synchronization order" concept... is not the same thing as a volatile read/write; in the prior paragraph, Chapter 17.4.2, it was implied that both were "synchronization actions"... not an order.
17.4.2. Actions
An inter-thread action is an action performed by one thread that can be detected or directly influenced by another thread. There are several kinds of inter-thread action that a program may perform:
Read (normal, or non-volatile). Reading a variable.
Write (normal, or non-volatile). Writing a variable.
Synchronization actions, which are:
Volatile read. A volatile read of a variable.
Volatile write. A volatile write of a variable.
Then, in the next chapter, the "total order" property is given to the concept of "synchronization order"... but not actions.
17.4.3. Programs and Program Order
Among all the inter-thread actions performed by each thread t, the program order of t is a total order that reflects the order in which these actions would be performed according to the intra-thread semantics of t.
Which makes me guess... that what they are trying to talk about in this paragraph is about the synchronize keyword... aka the monitor/CLH queue.
In which case... YES... it behaves as a seq_cst barrier no doubt about that...
Now... going back to the first quote:
"A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order)"
The fact that the documentation uses the word "variable v" implies a monotonic-base sequencing defined by a "per-address sequential consistency", which... as far as I understand... is the BASE Program Order sequencing respected by ALL memory model/processors (bare metal) ... no matter how weak or strong they are.
And if any JIT or compiler disobeys this principle... then I recommend no one should be using that implementation anyways...
The phrase "all subsequent reads of v" strongly implies that the barrier is anchored to the dependency chain of the address v (a monotonic dependency chain).
Hence this is effectively defined as a release, since unrelated ops on other addresses that are not v... are still allowed to be reordered before the release.
(To me) the usage of the word "v" is the hint that the volatile keyword is an acquire/release barrier.
If not... then the documentation needs to provide more explicit wording.
But this is not just a Java issue... even within the Linux Kernel... the concept of barriers/ fences and synchronization gets mixed up... so I don't blame them.
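For reference, here is what the two mechanisms under discussion look like side by side (the class and field names are made up); the VarHandle access modes make the intended barrier explicit, whereas the keyword leaves it to the JLS wording quoted above:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Flags {
    volatile int v;   // the volatile keyword: a JLS 17.4.2 "synchronization action"
    int plain;        // accessed below through a VarHandle with explicit access modes

    static final VarHandle PLAIN;
    static {
        try {
            PLAIN = MethodHandles.lookup().findVarHandle(Flags.class, "plain", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void demo() {
        v = 1;                                 // volatile write via the keyword
        PLAIN.setVolatile(this, 1);            // documented as the strongest access mode
        PLAIN.setRelease(this, 2);             // weaker: release-only write
        int r = (int) PLAIN.getAcquire(this);  // weaker: acquire-only read
        VarHandle.fullFence();                 // the full fence the VarHandle docs compare against
        System.out.println(r);
    }
}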
Dude, more than 5 years later and you've helped me solve my problem. Thank you very much!!! Be blessed!
The command used for broadcasting was wrong.
The correct command is:
am broadcast -n com.ishacker.android.cmdreceiver/.CmdReceiver --es Cmd "whoami"
The -n flag specifies the component name explicitly. Without it, the broadcast may not be delivered correctly to the receiver, and trying to get extras with intent.getStringExtra() will result in it returning null.
Thanks @Maveňツ for posting the suggestion in the comments.
It's been a few years since the question was asked, but since no good answer emerged, here's how I do it:
I use git's global config to store remote config blocks with fetch and push URLs, fetch and push refspecs, custom branch.<name>.remote routes, merge settings, etc.
The global config contains a config file per project, which gets included into $HOME/.gitconfig conditionally using [Include] and [IncludeIf] blocks.
[includeIf "gitdir:ia2/website/.git"]
path=ia2/website.config
[includeIf "onbranch:cf/"]
path=cloudflare-tests.config
In this example, the file $HOME/.gitconfigs/ia2/website.config is automatically included when I work on files in the $HOME/proj/ia2/website directory, which is the website for the ia2 project.
Also, in any project, I can create a branch named "cf/..." which causes the cloudflare-tests.config file to be included in git's configuration, which routes that branch to a repo I have connected to Cloudflare Pages. This allows any of my project to be pushed to a Cloudflare Pages site by simply creating an appropriate "cf/" branch in that project.
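For illustration, a per-project file under $HOME/.gitconfigs holds ordinary git config sections; the URL and branch below are invented:

# hypothetical contents of $HOME/.gitconfigs/ia2/website.config
[remote "origin"]
    url = git@github.com:example/ia2-website.git
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "main"]
    remote = origin
    merge = refs/heads/main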
The local config (ie, the .git/config file present in each clone) doesn't contain any repo configuration, other than things that accidentally end up there. Any settings I want to keep and duplicate on other machines are moved from the local .git/config to the global $HOME/.gitconfigs/$PROJECT.config file.
Since all configs for all my projects live under the same $HOME/.gitconfigs directory, this directory is itself a git repository, which I push to github, and fetch on all machines where I need it.
I have a repository named .gitconfigs at github, and I clone this in the $HOME directory of every machine I develop on.
Each one of the projects I'm working on has its corresponding $project.config file maintained in a branch with the same name as the project, and there are some config files that are included in all projects, like the cloudflare example I gave above.
The scheme is capable of maintaining a mix of private and public projects. Configs for public projects are pushed to my public .gitconfigs repo, and the private projects get pushed elsewhere. In a company setting, your dev team might maintain a private .gitconfigs repo for its internal projects.
You're welcome to inspect or fork my .gitconfigs repo at https://github.com/drok/.gitconfigs - give me a click-up if this helps you, and I welcome pull requests. I currently have public configs for curl, git, transmission, gdb and internet archive. One benefit of sending a PR is that I can give you feedback on whatever project you're adding. I've been using this technique for a year with huge time savings. No more losing project-specific repo settings for me.
Why are you using Breeze with Backpack?! Backpack has authorization out of the box. You must remove Breeze - it's not needed!
I faced this problem in wsl2.
Check the permission:
ls -l /var/run/docker.sock
Correct the permission:
sudo chgrp docker /var/run/docker.sock;
sudo chmod 660 /var/run/docker.sock;
And reset Docker to factory defaults.
Then, in PowerShell:
wsl --shutdown
After doing this, verify with:
docker ps
I just finally got this to work. I had tried all the documentation that you reference without success. This time around I used the PowerShell script included in this Snowflake quick start to setup the Oauth resource and client app.
https://quickstarts.snowflake.com/guide/power_apps_snowflake/index.html?index=..%2F..index#2
After using the PowerShell script to setup the enterprise apps I was still getting the bad gateway error. In my case it turns out that Power Automate was successfully connecting to Snowflake but was failing to run this connection test.
USE ROLE "MYROLE";
USE WAREHOUSE "COMPUTE_WH";
USE DATABASE "SNOWFLAKE_LEARNING_DB";
USE SCHEMA "PUBLIC";SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'PUBLIC'
-- PowerPlatform-Snowflake-Connector v2.2.0 - GET testconnection - GetInformationSchemaValidation
;
I had created a Snowflake trial account to test the OAuth connection, and in that account the COMPUTE_WH warehouse was suspended. As a result the test connection query was failing. After discovering that Power Automate was successfully connecting to Snowflake, I just had to do the proper setup on the Snowflake side to get the query to run (create a running warehouse, database, schema, and table, all usable by the specified user and role).
Here are some things to check:
If you have access to Entra ID check the sign-in logs under the service principal sign-ins tab. Verify your sign-in shows success.
In Snowflake check the sign-in logs for the user you created.
SELECT * FROM TABLE(information_schema.login_history()) WHERE user_name = '<Your User>' ORDER BY event_timestamp DESC;
Verify that the user you created has a default role, warehouse, and namespace specified.
If Power Automate was able to login check the query history for your user and see if/why the connection test query failed.
If Power Automate is successful in connecting to Snowflake but failing to run the connection test query, you could try the Preview version of the Power Automate Add Connection window; I see it has a check box to skip the connection test.
As of 2012, WS-SOAPAssertions is a W3C Recommendation. It provides a standardized WS-Policy assertion to indicate what version(s) of SOAP is supported.
For details on how to embed and reference a policy inside a WSDL document, refer to WS-PolicyAttachment.
Images and Icons for Visual Studio
Nuxt does not have a memory leak but Vue 3.5 is known to have one. It should be resolved when Vue 3.6 is released, or possibly you can pin to Vue 3.5.13 (see https://github.com/nuxt/nuxt/issues/32240).
Dot product is computationally faster for unit vectors: since the cosine similarity of unit vectors equals their dot product, Elasticsearch can skip the normalization step. For unit vectors: cosine(A,B) = dot(A,B) since ||A|| = ||B|| = 1.
{
  "mappings": {
    "properties": {
      "vector_field": {
        "type": "dense_vector",
        "dims": 384,  // your vector dimensions
        "similarity": "dot_product"
      }
    }
  }
}
Your approach can cause high memory usage with large integers, as it creates a sparse array filled with undefined values. The filter step also adds unnecessary overhead. For large datasets, it's inefficient compared to JavaScript's built-in .sort() or algorithms like Counting Sort or Radix Sort for specialized cases. Stick with .sort() for practicality and performance.
Based on your setup, the inconsistency in the latency that you're experiencing likely points toward a routing or proxy behavior difference between the external Application Load Balancer and the Classic version, rather than just a misconfiguration on your end. Though both load balancers operate in the Premium Tier and utilize Google's global backbone for low-latency anycast routing through GFEs, their internal architectures are not exactly the same. For instance, your external Application Load Balancer's Envoy layer, with its dynamic default load balancing algorithm, may re-route via alternative GFEs during intercontinental hops (for example, your Asia-to-Europe test) when minor congestion occurs, which explains the 260 ms-1000 ms fluctuations. Meanwhile, the Classic Load Balancer sticks to a simpler, single optimized path, minimizing fluctuations, hence the consistent RTT from Seoul to europe-west2.
It might also be worth contacting Google Cloud Support with all your findings to identify whether this is related to a larger network problem or an internal routing issue.
Your POST became a GET because of an unhandled HTTP redirect.
Your GKE ingress redirected your insecure http:// request to the secure https:// URL. Following this redirect, your requests client automatically changed the method from POST to GET, which is standard, expected web behavior.
You may try to fix the API_URL in your Cloud Run environment variable to use https:// from the start. This prevents the redirect and ensures your POST arrives as intended.
To reliably trace this, inspect the response.history attribute in your Cloud Run client code. This will show the exact redirect that occurred.
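A quick way to confirm the redirect from the client side with requests (the URL is a placeholder):

import requests

resp = requests.post("http://api.example.com/submit", json={"k": "v"})

# Each followed redirect hop shows up here; a 301/302 hop is what downgrades POST to GET
for hop in resp.history:
    print(hop.status_code, "->", hop.headers.get("Location"))

print("final method:", resp.request.method)  # will be GET if such a redirect was followed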
My polyfills got dropped when I upgraded Angular, and they needed to be re-added to angular.json (specifically, it was the @angular/localize line):
"polyfills": [
"zone.js",
"@angular/localize/init"
],