Unfortunately, NO.
Docker daemon is limited in this regard. It uses DNS services and static host files of the host machine.
Docs: https://docs.docker.com/engine/network/#embedded-dns-server
To work around this, folks recommend running another container with a proper DNS server and configuring Docker to use it: https://serverfault.com/questions/612075/how-to-configure-custom-dns-server-with-docker
I did it this way: https://github.com/generate94/convert_dll_to_lib/blob/main/README.md (includes an executable along with the source code)
basically:
Extract Exports – Run dumpbin /EXPORTS on the DLL to list exports.
Create .def – Write LIBRARY <DLL_NAME> and EXPORTS header.
Generate .lib – Use lib.exe /DEF:<def file> /OUT:<lib file>
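For illustration, a minimal sketch of those three steps from a Developer Command Prompt (mylib.dll, mylib.def, and SomeFunction are placeholder names; substitute your own):

dumpbin /EXPORTS mylib.dll > exports.txt

mylib.def (list each exported symbol from exports.txt under EXPORTS):
LIBRARY mylib
EXPORTS
    SomeFunction

lib /DEF:mylib.def /OUT:mylib.lib /MACHINE:X64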
What is the current directory? Very likely the path in which the executable resides.
Resizing a std::string in C++ Without Changing Its Capacity
When working with std::string in C++, resizing it is straightforward, but ensuring its capacity (the allocated memory size) stays the same can be confusing. Let’s break down how to resize a string while preserving its capacity—no jargon, just clarity.
What are size() and capacity()?
size(): The number of characters currently in the string.
capacity(): The total memory allocated to the string (always ≥ size()).
For example:
std::string str = "Hello";
std::cout << str.size(); // Output: 5
std::cout << str.capacity(); // Output: 15 (varies by compiler)
The resize() method changes the size() of the string. However:
If you increase the size beyond the current capacity(), the string reallocates memory, increasing capacity.
If you decrease the size, the capacity() usually stays the same (no memory is freed by default).
std::string str = "Hello";
str.reserve(20); // Force capacity to 20
str.resize(10); // Size becomes 10, capacity remains 20
str.resize(25); // Size 25, capacity increases (now ≥25)
To avoid changing the capacity, ensure the new size() does not exceed the current capacity():
size_t current_cap = str.capacity();
str.resize(new_size); // Only works if new_size ≤ current_cap
#include <iostream>
#include <string>

int main() {
    std::string str = "C++ is fun!";
    str.reserve(50); // Set capacity to (at least) 50
    std::cout << "Original capacity: " << str.capacity() << "\n"; // 50
    // Resize to 20 (within capacity)
    str.resize(20);
    std::cout << "New size: " << str.size() << "\n"; // 20
    std::cout << "Capacity remains: " << str.capacity(); // 50
    return 0;
}
From what I see, the snapshot from your StreamBuilder is not in use, you might as well remove the StreamBuilder.
Anytime you setState, your StreamBuilder rebuilds which might cause all the functions in there to get called multiple times and cause an infinite loop.
If you suck at CLIs just do the following:
Go to your project's .git folder
Then go to "lost-found" subfolder.
You will see a lot of blobs listed by hash. These are the files git preserved. If you're lucky, you'll find the lost files here.
Simply open them in a text editor, check whether they're recent enough, and save them.
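For reference, that folder is created by git's fsck command; if you don't see lost-found, this single command (sorry!) will populate it:
git fsck --lost-found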
this might help:
foreach ($_SERVER as $parm => $value) echo "<BR>$parm = '$value'<BR>";
Any answer? I have the same issue.
BR
Paco
A bit of a late reply, but I had trouble finding this info myself; then seeing your post pointed me toward another approach.
# files
[System.IO.FileInfo[]]$files = $([System.IO.Directory]::EnumerateFiles($PWD,"*.*",[System.IO.SearchOption]::AllDirectories)) | %{ [System.IO.FileInfo]$_ }
# custom properties
$files | select BaseName,Name,FullName,Length | Export-Csv -Path $PWD\filelist.csv -Delimiter ';' -NoTypeInformation -Encoding UTF8
# files - splat
$files_ps = @{ Property = @( "BaseName","Name","FullName","Length" ) }
$files = $([System.IO.Directory]::EnumerateFiles($PWD,"*.*",[System.IO.SearchOption]::AllDirectories)) | %{ [System.IO.FileInfo]$_ | select @files_ps }
$files | Export-Csv -Path $PWD\filelist.csv -Delimiter ';' -NoTypeInformation -Encoding UTF8
# directory
[System.IO.DirectoryInfo[]]$directory = [System.IO.Directory]::EnumerateDirectories($PWD,"*",[System.IO.SearchOption]::AllDirectories)
The error is due to a missing namespace in the flutter_secure_storage package. To fix this, you’ll need to update the package to a version where the namespace is added.
Just make sure to set flutter_secure_storage: ^9.2.4 in your pubspec.yaml file and use the latest Flutter SDK version. This simple change should resolve your issue.
nice, but
➜ ~ sudo apt install linux-firmware
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package linux-firmware
Upgrade the tflite package compileSdkVersion to 34
I ended up with this:
It's probably a bad approach (because even as an admin I need to wait for an additional request to finish), but it works. There is a mention of server-side rendering in the comments, but I didn't have enough time to invest in researching & implementing it, even though it's probably the way to go.
You cannot set the value of a property if it is null or undefined (as indicated by the ?.)
You should check that the property exists first and then set it:
if (statusUpdateRefreshReasonRef.current) {
statusUpdateRefreshReasonRef.current.value = cloneStatusUpdateClone;
}
Took a long time to figure out.
NotificationCenter.default.addObserver(self, selector: #selector(updateGroupxx(notification:)), name: .NSCalendarDayChanged, object: nil)
$b = '1';
$a = 'b';
$b = '2';
print "$$a";
Output: 2 (current value of $b).
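Note that "$$a" is a symbolic reference: it only works on package variables (as in the example above) and is forbidden under strict mode. If your script uses strict, you'd have to disable it locally for this trick:
no strict 'refs';
print "$$a"; # 2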
The problem is that the executable file suffix is .exe, while running on Linux. A quick and easy fix is to simply change the suffix to .bin, for example:
exe = executable(
'main.bin', # Do not use .exe here
'main.cu',
link_args: '-fopenmp',
cuda_args: '-Xcompiler=-fopenmp',
)
test('simple_run', exe)
This should run perfectly well. The decision to try running an .exe file with mono while on Linux comes from this exact line in meson.
You can add multiple --add flags and pass the metadata params like this:
stripe trigger checkout.session.completed \
--add checkout_session:metadata.plan="id" \
--add checkout_session:metadata.user="id"
# Imports (pandas and statsmodels were also needed but missing earlier)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
# Re-run the log transformation and regression
# Convert 'hourpay' to numeric, forcing errors to NaN
df['hourpay'] = pd.to_numeric(df['hourpay'], errors='coerce')
# Step 1 (revised): Drop missing, zero or negative wages
df = df[df['hourpay'] > 0]
# Create log(wage)
df['log_wage'] = np.log(df['hourpay'])
# Step 2: Motherhood dummy: 1 if has dependent child under 19, 0 otherwise
df['motherhood'] = df['fdpch19'].apply(lambda x: 1 if x > 0 else 0)
# Step 3: Convert categorical variables
df['education'] = df['degcls7'].astype('category')
df['occupation'] = df['occup_group'].astype('category')
df['worktype'] = df['ftpt'].astype('category')
# Step 4: Experience approximation (proxy by age)
df['experience'] = df['age']
# Step 5: Regression formula
formula = 'log_wage ~ motherhood + C(education) + experience + C(occupation) + C(worktype)'
# Step 6: Run OLS regression with robust standard errors
model = smf.ols(formula, data=df).fit(cov_type='HC1')
# Display regression results
model.summary()
I believe what you're looking for is:
HttpResponseMessage response = await httpClient.GetAsync(uri);
if (response.Headers.Location == new Uri(some_uri_string)) {
return true;
}
I realize this response is more than a decade late, but I'd thought I'd answer it.
I am not strong in Excel. I have never used a script or recorded a macro. I have a site map with typed numbers in the cells, but I want them to have colons because they are MAC addresses. I have tried a lot of these ideas but it cuts out numbers/letters. If I make a whole copy of just one large set I have got it to work, ish. I copy it and erase my old cell info to paste in the new stuff, but both disappear when I do it? Any ideas for a learner of this?
Raka Putra, how did you fix the error?
import torch
import onnxruntime_extensions
import onnx
import onnxruntime as ort
import numpy as np
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import subprocess
model_name = "spital/gpt2-small-czech-cs"
model = GPT2LMHeadModel.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
input_text = "Téma: Umělá inteligence v moderní společnosti."
# Export the tokenizers to ONNX using gen_processing_models
onnx_tokenizer_coder_path = "results/v5/model/tokenizer_coder.onnx"
onnx_tokenizer_decoder_path = "results/v5/model/tokenizer_decoder.onnx"
# Generate the tokenizers ONNX model
gen_tokenizer_coder_onnx_model = onnxruntime_extensions.gen_processing_models(tokenizer, pre_kwargs={})[0]
gen_tokenizer_decoder_onnx_model = onnxruntime_extensions.gen_processing_models(tokenizer, post_kwargs={})[1]
# Save the tokenizers ONNX model
with open(onnx_tokenizer_coder_path, "wb") as f:
    f.write(gen_tokenizer_coder_onnx_model.SerializeToString())
with open(onnx_tokenizer_decoder_path, "wb") as f:
    f.write(gen_tokenizer_decoder_onnx_model.SerializeToString())
# Export the Huggingface model to ONNX
onnx_model_path = "results/v5/model/"
# Export the model to ONNX
command = [
"optimum-cli", "export", "onnx",
"-m", model_name,
"--opset", "18",
"--monolith",
"--task", "text-generation",
onnx_model_path
]
subprocess.run(command, check=True)
# Adding position_ids for tokenizer coder for model
add_tokenizer_coder_onnx_model = onnx.load(onnx_tokenizer_coder_path)
shape_node = onnx.helper.make_node(
"Shape",
inputs=["input_ids"],
outputs=["input_shape"]
)
gather_node = onnx.helper.make_node(
"Gather",
inputs=["input_shape", "one"],
outputs=["sequence_length"],
axis=0
)
cast_node = onnx.helper.make_node(
"Cast",
inputs=["sequence_length"],
outputs=["sequence_length_int"],
to=onnx.TensorProto.INT64
)
# Creating position_ids node for tokenizer coder for model
position_ids_node = onnx.helper.make_node(
"Range",
inputs=["zero", "sequence_length_int", "one"],
outputs=["shorter_position_ids"]
)
zero_const = onnx.helper.make_tensor("zero", onnx.TensorProto.INT64, [1], [0])
one_const = onnx.helper.make_tensor("one", onnx.TensorProto.INT64, [1], [1])
position_ids_output = onnx.helper.make_tensor_value_info(
"position_ids",
onnx.TensorProto.INT64,
["sequence_length"]
)
unsqueeze_axes = onnx.helper.make_tensor(
"unsqueeze_axes",
onnx.TensorProto.INT64,
dims=[1],
vals=[0]
)
expand_node = onnx.helper.make_node(
"Unsqueeze",
inputs=["shorter_position_ids", "unsqueeze_axes"],
outputs=["position_ids"]
)
expanded_position_ids_output = onnx.helper.make_tensor_value_info(
"position_ids",
onnx.TensorProto.INT64,
["batch_size", "sequence_length"]
)
# Adding position_ids to outputs of tokenizer coder for model
add_tokenizer_coder_onnx_model.graph.node.extend([shape_node, gather_node, cast_node, position_ids_node, expand_node])
add_tokenizer_coder_onnx_model.graph.output.append(expanded_position_ids_output)
add_tokenizer_coder_onnx_model.graph.initializer.extend([zero_const, one_const, unsqueeze_axes])
# Export tokenizer coder with position_ids for model
onnx.save(add_tokenizer_coder_onnx_model, onnx_tokenizer_coder_path)
# Adding operation ArgMax node to transfer logits -> ids
onnx_argmax_model_path = "results/v5/model/argmax.onnx"
ArgMax_node = onnx.helper.make_node(
"ArgMax",
inputs=["logits"],
outputs=["ids"],
axis=-1,
keepdims=0
)
# Creating ArgMax graph
ArgMax_graph = onnx.helper.make_graph(
[ArgMax_node],
"ArgMaxGraph",
[onnx.helper.make_tensor_value_info("logits", onnx.TensorProto.FLOAT, ["batch_size", "sequence_length", "vocab_size"])],
[onnx.helper.make_tensor_value_info("ids", onnx.TensorProto.INT64, ["batch_size", "sequence_length"])]
)
# Creating ArgMax ONNX model
gen_ArgMax_onnx_model = onnx.helper.make_model(ArgMax_graph)
# Exporting ArgMax ONNX model
onnx.save(gen_ArgMax_onnx_model, onnx_argmax_model_path)
# Adding shape for Tokenizer decoder outputs (Assuming shape with batch_size and sequence_length)
add_tokenizer_decoder_onnx_model = onnx.load(onnx_tokenizer_decoder_path)
expanded_shape = onnx.helper.make_tensor_value_info(
"str",
onnx.TensorProto.STRING,
["batch_size", "sequence_length"]
)
# Adding shape to Tokenizer decoder outputs
output_tensor = add_tokenizer_decoder_onnx_model.graph.output[0]
output_tensor.type.tensor_type.shape.dim.clear()
output_tensor.type.tensor_type.shape.dim.extend(expanded_shape.type.tensor_type.shape.dim)
# Exporting Tokenizer decoder with shape ONNX model
onnx.save(add_tokenizer_decoder_onnx_model, onnx_tokenizer_decoder_path)
# Test Tokenizer coder, Model, ArgMax, Tokenizer decoder using an Inference session with ONNX Runtime Extensions before merging
# Test the tokenizers ONNX model
# Initialize ONNX Runtime SessionOptions and load custom ops library
sess_options = ort.SessionOptions()
sess_options.register_custom_ops_library(onnxruntime_extensions.get_library_path())
# Initialize ONNX Runtime Inference session with Extensions
coder = ort.InferenceSession(onnx_tokenizer_coder_path, sess_options=sess_options, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
model = ort.InferenceSession(onnx_model_path + "model.onnx", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
ArgMax = ort.InferenceSession(onnx_argmax_model_path, sess_options=sess_options, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
decoder = ort.InferenceSession(onnx_tokenizer_decoder_path, sess_options=sess_options, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
# Prepare dummy input text
input_feed = {"input_text": np.asarray([input_text])} # Assuming "input_text" is the input expected by the tokenizers
# Run the tokenizer coder
tokenized = coder.run(None, input_feed)
print("Tokenized:", tokenized)
# Run the model
model_output = model.run(None, {"input_ids": tokenized[0], "attention_mask": tokenized[1], "position_ids": tokenized[2]})
print("Model output (logits):", model_output[0])
# Run the ArgMax
argmax_output = ArgMax.run(None, {"logits": model_output[0]})
print("ArgMax output (token ids):", argmax_output[0])
# Run the tokenizer decoder
detokenized = decoder.run(None, input_feed={"ids": argmax_output[0]})
print("Detokenized:", detokenized)
# Merge the tokenizer and model ONNX files into one
onnx_combined_model_path = "results/v5/model/combined_model_tokenizer.onnx"
# Load the tokenizers and model ONNX files
tokenizer_coder_onnx_model = onnx.load(onnx_tokenizer_coder_path)
model_onnx_model = onnx.load(onnx_model_path + "model.onnx")
ArgMax_onnx_model = onnx.load(onnx_argmax_model_path)
tokenizer_decoder_onnx_model = onnx.load(onnx_tokenizer_decoder_path)
# Inspect the ONNX models to find the correct input/output names
print("\nTokenizer coder Model Inputs:", [node.name for node in tokenizer_coder_onnx_model.graph.input])
print("Tokenizer coder Model Outputs:", [node.name for node in tokenizer_coder_onnx_model.graph.output])
print("Tokenizer coder Model Shape:", [node.type.tensor_type.shape for node in tokenizer_coder_onnx_model.graph.output])
print("Tokenizer coder Model Type:", [node.type.tensor_type.elem_type for node in tokenizer_coder_onnx_model.graph.output])
print("\nModel Inputs:", [node.name for node in model_onnx_model.graph.input])
print("Model Outputs:", [node.name for node in model_onnx_model.graph.output])
print("Model Shape:", [node.type.tensor_type.shape for node in model_onnx_model.graph.output])
print("Model Type:", [node.type.tensor_type.elem_type for node in model_onnx_model.graph.output])
print("\nArgMax Inputs:", [node.name for node in ArgMax_onnx_model.graph.input])
print("ArgMax Outputs:", [node.name for node in ArgMax_onnx_model.graph.output])
print("ArgMax Shape:", [node.type.tensor_type.shape for node in ArgMax_onnx_model.graph.output])
print("ArgMax Type:", [node.type.tensor_type.elem_type for node in ArgMax_onnx_model.graph.output])
print("\nTokenizer decoder Model Inputs:", [node.name for node in tokenizer_decoder_onnx_model.graph.input])
print("Tokenizer decoder Model Outputs:", [node.name for node in tokenizer_decoder_onnx_model.graph.output])
print("Tokenizer decoder Model Shape:", [node.type.tensor_type.shape for node in tokenizer_decoder_onnx_model.graph.output])
print("Tokenizer decoder Model Type:", [node.type.tensor_type.elem_type for node in tokenizer_decoder_onnx_model.graph.output])
# Merge the tokenizer coder and model ONNX files
combined_model = onnx.compose.merge_models(
tokenizer_coder_onnx_model,
model_onnx_model,
io_map=[('input_ids', 'input_ids'), ('attention_mask', 'attention_mask'), ('position_ids', 'position_ids')]
)
# Merge the model and ArgMax ONNX files
combined_model = onnx.compose.merge_models(
combined_model,
ArgMax_onnx_model,
io_map=[('logits', 'logits')]
)
# Merge the ArgMax and tokenizer decoder ONNX files
combined_model = onnx.compose.merge_models(
combined_model,
tokenizer_decoder_onnx_model,
io_map=[('ids', 'ids')]
)
# Check combined ONNX model
inferred_model = onnx.shape_inference.infer_shapes(combined_model)
onnx.checker.check_model(inferred_model)
# Save the combined model
onnx.save(combined_model, onnx_combined_model_path)
# Test the combined ONNX model using an Inference session with ONNX Runtime Extensions
# Initialize ONNX Runtime SessionOptions and load custom ops library
sess_options = ort.SessionOptions()
sess_options.register_custom_ops_library(onnxruntime_extensions.get_library_path())
# Initialize ONNX Runtime Inference session with Extensions
session = ort.InferenceSession(onnx_combined_model_path, sess_options=sess_options, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
# Prepare dummy input text
input_feed = {"input_text": np.asarray([input_text])} # Assuming "input_text" is the input expected by the tokenizer
# Run the model
outputs = session.run(None, input_feed)
# Print the outputs
print("logits:", outputs)
It's possibly an issue caused by a mismatch with the underlying node version. I ran into this, and upgrading my node version resolved the issue.
Run:
sudo npm install n -g
n stable
I do not see the issue in the stable version as of today: v22.14.0
Reference: Upgrading Node.js to the latest version
The problem is in the way you instantiate person, you can do it in the following way:
person = random.sample(people, 1)[0]
This way, person will only contain the appropriate text.
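Alternatively (a small simplification, same people list assumed), random.choice returns a single element directly:
person = random.choice(people)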
The issue is a Java version mismatch: your version of Spark supports Java only up to 17, but it seems that your Java version is higher than 17.
Here is my solution, which I think is better optimized:
School.objects.annotate(
number_of_class=Subquery(
Class.objects.filter(
school_id=OuterRef("pk"),
is_deleted=False,
# Add additional filters here
).values("school_id").annotate(count=Func(F("id"), function="COUNT")).values("count")
)
)
Try this:
from ..folder1.file import *
Found the answer, courtesy of GitHub Copilot:
To execute custom logic before navigation, you can wrap the next/link component with a custom component and handle the logic within that component.
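For illustration, a minimal sketch of such a wrapper (the component and prop names are hypothetical; this assumes the pages router's useRouter from next/router):

import { useRouter } from 'next/router';
import type { MouseEvent, ReactNode } from 'react';

// Hypothetical wrapper: run custom logic first, then navigate programmatically.
export function LinkWithAction(props: { href: string; onBeforeNavigate?: () => Promise<void> | void; children: ReactNode }) {
  const router = useRouter();
  const handleClick = async (e: MouseEvent) => {
    e.preventDefault();                                          // stop the default anchor navigation
    if (props.onBeforeNavigate) await props.onBeforeNavigate();  // custom logic before navigating
    router.push(props.href);                                     // then navigate
  };
  return <a href={props.href} onClick={handleClick}>{props.children}</a>;
}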
Once you use MediaQuery.of(context).devicePixelRatio or View.of(context).devicePixelRatio, the size will change as the device size or pixel ratio changes.
Use a constant value, like just 160, to get a size of 160 across all screen sizes.
I'm not giving an answer, but rather trying to get this to work for me. I put in the Service, Cluster, and DesiredCount, but nothing happens that I can see. I don't know where to go to see any kind of log to tell me whether or not it ran and/or what the error might be.
Any help would be appreciated.
Thanks.
I found that explanation really helpful and solved a similar issue for me
Anyone found a workaround in this except using list for recursive property?
There are a few reasons why these differences might occur:
Some browsers apply a default background to the 'html' element but not the 'body' element, or vice versa. For example, in some browsers, the 'html' element naturally has a white background.
When you set a background color for 'html', it typically extends across the entire viewport, especially if the 'body' doesn’t have an explicit background.
On the other hand, setting a background color for 'body' may only affect the content inside it and might not extend beyond it—this can be noticeable when the page is shorter than the viewport.
Additionally, different browsers handle background rendering in their own way. Some treat the 'html' element as the background for the entire viewport, while others allow the 'body' element to take control.
To ensure consistency across browsers, using a CSS reset like Normalize.css can help override these default behaviors and create a more uniform appearance.
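For example, explicitly setting both elements avoids relying on any browser's defaults (the color value here is arbitrary):

html, body {
  margin: 0;
  background-color: #ffffff; /* same background on both, no fallback ambiguity */
}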
Late to the party, but one obvious case is selections -- dragging from one place to another in a displayed document is a range. That is, unless you place both ends just exactly right to get a whole element. So every browser and word processor has had to implement ranges pretty much from the beginning. They just called it a "selection" instead. We all use them countless times every day.
Downgrading the node version from 22 to 14 worked for me
You'd need to redo the compile shader step in the tutorial: https://vulkan-tutorial.com/Drawing_a_triangle/Graphics_pipeline_basics/Shader_modules
run the compile.bat file in your Vulkan/shaders directory.
15 Years later....
Using solutions mentioned by others here (thanks!), I ended up using the following code.
I need events to fire only when a tab is clicked.
I have DataGrids in Tabs and selecting any Row would fire the TabControl.SelectionChanged event.
Using this in the TabControl_SelectionChanged event solved my problem.
I added the option to switch by TabItem.Name instead of its SelectedIndex in case I move tabs around in development later.
if (e.OriginalSource is TabControl)
{
    var tabControl = e.OriginalSource as TabControl;
    if (tabControl.Name == "<YourTabControlName>")
    {
        switch ((tabControl.SelectedItem as TabItem).Name)
        {
            case "First_Tab": // First tab's name
                //DOWORK
                break;
            case "Second_Tab": // Second tab's name
                //DOWORK
                break;
            default:
                break;
        }
    }
}
Run flutter clean and then flutter pub get.
By extent, should I check if the pointer to the device tree provided to the kernel is NULL?
I don't think the RISC-V specification per se specifies which addresses might be valid to access when the kernel boots. This information must be hardcoded into the kernel, or detected by probing the hardware or BIOS somehow, or provided by the device tree itself. In that last case it is impossible to sanitize the device tree address, so don't. In the other cases I don't think it's worth the effort; I would simply allow whatever happens when you access invalid memory to happen.
To avoid errors, you can use FlutterFire to set up Firebase for the project. The instructions on how to use flutterfire are given here
This was a really troublesome thing to track down, but a simple C++ extension re-install resolved it.
I tried the following ways:
setting the compiler in the VSCode C++ extension UI
setting the compiler and other settings in the C++ .json files
Still getting the same error, so: uninstall all C++-related extensions from Microsoft, and install them again, or only the C++ extension from Microsoft.
There was probably some extension problem in my case.
I just investigated some more myself and found this solution making use of the walrus operator:
import numpy as np
a = np.arange(10)
dim_a, dim_total = 2, 4
(shape := [1] * dim_total)[dim_a] = -1
np.reshape(a, shape)
I like that it's very compact, but the := is still not very commonly used.
% echo "your string literal here" | wc -c
25
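Note that wc -c counts bytes, including the trailing newline that echo appends: "your string literal here" is 24 characters, hence 25. To count the string alone, suppress the newline:
printf '%s' "your string literal here" | wc -c
24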
I found that I needed to mention the font style that I had added previously, like this:
doc.addFileToVFS("Amiri-Regular.ttf", arabicFont);
doc.addFont("Amiri-Regular.ttf", "Amiri", "normal");
doc.setFont("Amiri");
so my headStyles should look like this:
headStyles: { font: "Amiri", fontStyle: "normal" }
This is a Windows PC; the file structure in the search bar uses forward slashes, but in the "Index of" section there are backslashes.
Windows actually supports both types of slashes in many contexts, though backslashes are the standard convention. This mixed use of slashes can sometimes be confusing, but the system generally understands both formats depending on the context.
The search bar might be showing a web URL format (which uses forward slashes)
The "Index of" section is showing the actual Windows file path format (which uses backslashes)
Use [ngClass] instead of *ngClass.
<ol>
<li [ngClass]="{'active': step === 'step1'}" (click)="step='step1'">Step1</li>
</ol>
I've come back to this thread a lot and used adaptations of @Adib's code several times, but I recently got a version that extends functionality to be more complete, as I believe ColumnTransformer and/or Pipeline will throw an error if you try to use get_feature_names_out().
from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np
import pandas as pd
from scipy import sparse

class MultiHotEncoder(BaseEstimator, TransformerMixin):
    """
    A custom transformer that encodes columns containing lists of categorical values
    into a multi-hot encoded format, compatible with ColumnTransformer.

    Parameters:
    -----------
    classes : list or None, default=None
        List of all possible classes. If None, classes will be determined from the data.
    sparse_output : bool, default=False
        If True, return a sparse matrix, otherwise return a dense array.
    """
    def __init__(self, classes=None, sparse_output=False):
        self.classes = classes
        self.sparse_output = sparse_output

    def fit(self, X, y=None):
        """
        Fit the transformer by determining all possible classes.

        Parameters:
        -----------
        X : array-like of shape (n_samples, n_features)
            Columns containing lists of values.

        Returns:
        --------
        self : object
            Returns self.
        """
        # Handle DataFrame input properly
        if isinstance(X, pd.DataFrame):
            X_processed = X.values
        else:
            X_processed = np.asarray(X)
        # Collect all unique classes
        if self.classes is None:
            unique_classes = set()
            for col in X_processed.T:
                for row in col:
                    if row is not None and hasattr(row, '__iter__') and not isinstance(row, (str, bytes)):
                        unique_classes.update(row)
            self.classes_ = sorted(list(unique_classes))
        else:
            self.classes_ = sorted(list(self.classes))
        # Create a dictionary for fast lookup
        self.class_dict_ = {cls: i for i, cls in enumerate(self.classes_)}
        return self

    def transform(self, X):
        """
        Transform lists to multi-hot encoding.

        Parameters:
        -----------
        X : array-like of shape (n_samples, n_features)
            Columns containing lists of values.

        Returns:
        --------
        X_transformed : array of shape (n_samples, n_features * n_classes)
            Transformed array with multi-hot encoding.
        """
        # Handle DataFrame input properly
        if isinstance(X, pd.DataFrame):
            X_processed = X.values
        else:
            X_processed = np.asarray(X)
        n_samples, n_features = X_processed.shape
        n_classes = len(self.classes_)
        # Initialize the output array
        if self.sparse_output:
            rows = []
            cols = []
            for j, col in enumerate(X_processed.T):
                for i, row in enumerate(col):
                    if row is None:
                        continue
                    if not hasattr(row, '__iter__') or isinstance(row, (str, bytes)):
                        continue
                    for item in row:
                        if item in self.class_dict_:
                            rows.append(i)
                            cols.append(j * n_classes + self.class_dict_[item])
            data = np.ones(len(rows), dtype=int)
            result = sparse.csr_matrix((data, (rows, cols)), shape=(n_samples, n_features * n_classes))
        else:
            result = np.zeros((n_samples, n_features * n_classes), dtype=int)
            for j, col in enumerate(X_processed.T):
                for i, row in enumerate(col):
                    if row is None:
                        continue
                    if not hasattr(row, '__iter__') or isinstance(row, (str, bytes)):
                        continue
                    for item in row:
                        if item in self.class_dict_:
                            result[i, j * n_classes + self.class_dict_[item]] = 1
        return result

    def fit_transform(self, X, y=None):
        return self.fit(X).transform(X)

    def get_feature_names_out(self, input_features=None):
        """
        Get output feature names for transformation.

        Parameters:
        -----------
        input_features : array-like of str or None, default=None
            Input features. Used as a prefix for output feature names.

        Returns:
        --------
        feature_names_out : ndarray of str objects
            Array of output feature names.
        """
        if input_features is None:
            input_features = [""]
        feature_names_out = []
        for feature in input_features:
            feature_names_out.extend([f"{feature}_{cls}" for cls in self.classes_])
        return np.array(feature_names_out)
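For completeness, a minimal usage sketch inside a ColumnTransformer (the genres column and its values are made up):

import pandas as pd
from sklearn.compose import ColumnTransformer

df = pd.DataFrame({"genres": [["rock", "pop"], ["pop"], ["jazz", "rock"]]})
ct = ColumnTransformer([("genres", MultiHotEncoder(), ["genres"])])
encoded = ct.fit_transform(df)      # shape (3, 3): one column per class (jazz, pop, rock)
print(ct.get_feature_names_out())   # e.g. ['genres__genres_jazz', ...], no error raised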
You should give uv a chance: https://docs.astral.sh/uv/guides/integration/docker/. It’s very actively developed and has very good documentation.
Peek into the source here; I think you'll find what you're looking for:
I have the same problem.
try this:
npm update eslint @typescript-eslint/eslint-plugin @typescript-eslint/parser
and then
npm cache clean --force && rm -rf node_modules/ && npm i
The problem with this code was that in the screen composable I create the viewModel with the class constructor, like so:
@Composable
fun CameraScreen(
modifier: Modifier = Modifier,
lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current,
cameraViewModel: CameraViewModel = CameraViewModel(),
)
Which doesn't make it part of the composition, nor does it save its state. To solve this I just changed the assignment of the default value as follows:
@Composable
fun CameraScreen(
modifier: Modifier = Modifier,
lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current,
cameraViewModel: CameraViewModel = viewModel()
)
And now everything works correctly!
In Python 3.9.2 I get the same error. Why?
In my case, adding this to dependencies worked:
dependencies:
flutter_plugin_android_lifecycle: ^2.0.27
With Propshaft, this is handled by the resolver.
Rails.application.assets.resolver.resolve(logical_path).present?
I examined the target website you're scraping, and since the data isn't loaded via a visible API, your current method is necessary. However, to improve efficiency and speed, consider using an asynchronous approach. Here’s how:
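A minimal sketch of that asynchronous approach with asyncio and aiohttp (the URL list is a placeholder for the pages you're actually scraping):

import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.text()

async def main(urls):
    async with aiohttp.ClientSession() as session:
        # Fire all requests concurrently instead of one at a time.
        return await asyncio.gather(*(fetch(session, u) for u in urls))

urls = [f"https://example.com/page/{i}" for i in range(1, 6)]  # placeholder URLs
pages = asyncio.run(main(urls))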
In Python, the convention is to place version bookkeeping (such as __version__ and __author__) before imports. This is a widely followed practice, though it's not strictly required by the language itself.
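For example, a module following that convention might start like this (the values are placeholders):

"""Example module docstring."""

__version__ = "1.0.0"    # bookkeeping before imports
__author__ = "Jane Doe"  # placeholder name

import os
import sys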
from ibapi.utils import floatMaxString, decimalMaxString, intMaxString
If you can't import it, please reinstall IBAPI properly.
A bit more messy, but with less boilerplate is just to unpack the Literal. e.g.:
from typing import Literal, get_args

MyType = Literal["a", "b", "c"]
A, B, C = get_args(MyType)
Below is what I did, thanks to @classikh! It fixed my issue:
// Define your form.
const form = useForm<z.infer<typeof formSchema>>({
resolver: zodResolver(formSchema),
defaultValues: {
email: '',
},
values: {
email: user?.email ?? '',
}
})
In the above code, 'user' comes from my hook, which gets populated when the user is fetched.
After using wwwizer.com I was able to redirect the domain properly. I removed the GoDaddy forward configuration, and opted to use a new A record into the DNS to redirect:
| Type | Name | Data |
|---|---|---|
| A | @ | 174.129.25.170 |
Take a look at wwwizer for more information.
In the project tree (on the right), expand the Android64 target; there are several folders inside. Configuration "Application Store" means aab, "Development" means apk.
The same thing was happening to me, and I tried everything. In the end I got it working by giving the machine a kick (rebooting) and deactivating the WordPress plugins GoSMTP and GoSMTP Pro, in that order, then reactivating them in the same order. Then I left both options, force from email and force from name, checked, and everything was OK. Thanks, everyone.
For me, it was called on the wrong thread. This will help:
await InvokeAsync(() => NavigationManager.NavigateTo("/chat"));
What is the integral of the function f(x) = sin 2x?
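For the record: ∫ sin(2x) dx = -(1/2) cos(2x) + C. (Substitute u = 2x, du = 2 dx, so it becomes (1/2) ∫ sin u du = -(1/2) cos u + C.)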
Sorry about the broken link, and I appreciate you pointing it out. Thanks for sharing your project structure—that definitely helps in troubleshooting.
Since you already tried 'ENGINE': 'custom_backend.base' and got an error, here are a few things to double-check:
Make sure every folder in custom_backend has an __init__.py file, so Django recognizes them as proper Python packages.
Try importing it manually with from custom_backend.base import DatabaseWrapper to see if Django can actually find it.
Check if DatabaseWrapper is fully implemented, including attributes like client_class and methods like get_new_connection().
If you are still stuck, let me know what error you are seeing now—I will be happy to help!
n = 5
# Hollow inverted triangle: the top row (i == n) is solid,
# every other row prints stars only at its two edges.
for i in range(n, 0, -1):
    for j in range(n - i):      # leading spaces
        print(" ", end="")
    for j in range(2*i - 1):    # row width is 2*i - 1
        if j == 0 or i == n or j == 2*i - 2:
            print("*", end="")
        else:
            print(" ", end="")
    print()
The programming needs to be inside the PDF (it has always been that way for almost 3 decades).
Then the button sends the data, as designed by you, to your server.
So here I am, E.T., phoning home.
This was done in seconds by adding a GUI button as text.
For a simple method, see my previous answer https://stackoverflow.com/a/78212967/10802527
There's an issue with building locally with expo.
I used npx expo prebuild to generate the ios folder and then followed the instructions here:
https://reactnative.dev/docs/publishing-to-app-store
This worked well!
I deleted the folder at c:\flutter, reinstalled Flutter, and it worked for me.
Create Football Scores Website
A football score website
I'll help you create a football (soccer) scores website. I'll set up a React application with Vite, including Tailwind CSS for styling, and create a basic structure for displaying football scores.
I've created a football scores website with the following features:
- A clean, modern design using Tailwind CSS
- Match cards displaying: team names, scores, match status (LIVE, SCHEDULED, or FINISHED), league name, and match time
- A responsive layout that adapts to different screen sizes
- Color-coded status indicators
- Sample match data (which you can later replace with real API data)
The development server is now running. You'll see a responsive grid of match cards with different statuses and scores. The design is mobile-friendly and includes all the essential information a football scores website should have.
Would you like to add any specific features or make any adjustments to the current design?
SOLVED: by changing the file in the import: "Home.css" -> "home.css".
No idea why making it lower-case magically worked; my best guess is that the file on disk is actually named home.css and the bundler resolves imports case-sensitively. Many of my other *.css files use Pascal casing and they are imported just fine, presumably because they match their on-disk names.
Hope this helps somebody else!
Got back from 8hours driving truck today and sat down to start coding VSCODE says "You can't use this version of the application, you have macOS 10.15.7. The application requires 11.0 or later"
2 Questions
What ***kw*t at Microsoft thinks it's a good idea to auto upgrade someones dev environments to a version that is incompatible with the OS? Don't know but if I find out I'll throttle the c&&t
Is there a roll back utility that allows me to keep all my installed tools? NO
HOW TO FIX - w/ EXTENSIONS INTACT
To fix Visual Studio Code (VSCode) on Catalina 10.15 after the auto-update to a build requiring macOS 11.0:
Download v1.97 available Here:
https://code.visualstudio.com/download# (says 10.15+ but it's **incompatible**)
https://visual-studio-code.en.uptodown.com/mac/versions (1.97.1 - works)
I renamed Visual Studio Code.app to Visual Studio Code osx11.0.app (DID NOT DELETE IT)
Unzipped the download & dragged it over into the Applications folder.
Ran it. ALL EXTENSIONS INTACT - TFFT!!
Turn off AUTOF***DATE:
Code > Preferences > Setting > search "update mode" > set to none > restart VSCode
After a small heart attack and a minor meltdown . . . back up and running : )
For future REF: There's a Settings Sync facility that will sync to GitHub; I will update when I've worked it out.
As the other answer suggests, this error is due to a missing DLL file, likely from a plugin. But you do not need to enable the plugin by default. Simply adding it under Plugins under your .uproject file is enough.
Executing the SQL command through a cursor is one way that allows you to change arraysize.
The dummy_gen_function can be modified:
conn = await session.connection()
cursor = conn.connection.cursor()
cursor.arraysize = 1000
result = cursor.execute("xxxxxx")
I think you want temp_p_num = p_num inside the for p_num loop, and the test temp_p_num == boxes[p_num] should be p_num == boxes[temp_p_num]. IOW, the prisoner p_num opens a box with his number in it.
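A minimal sketch of the corrected loop, under my reading of the question's variable names (boxes maps a box index to the number inside it):

import random

boxes = list(range(100))
random.shuffle(boxes)

def prisoner_succeeds(p_num, boxes, max_opens=50):
    temp_p_num = p_num                    # start at the box matching your own number
    for _ in range(max_opens):
        if p_num == boxes[temp_p_num]:    # this box holds the prisoner's number
            return True
        temp_p_num = boxes[temp_p_num]    # otherwise follow the chain
    return False

print(all(prisoner_succeeds(p, boxes) for p in range(100)))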
Thanks to the comment from Martin Brown, that answered it and it works now as expected!
I'm the maintainer of category encoders. Thanks for reporting this issue. There was a problem in the library, I've fixed it now in version 2.8.1
Also thanks to @Ben for forwarding the issue to the repository
The problem stemmed from a third-party clustering library, which removes markers that fall outside the visible map area, thereby preventing deselection from functioning correctly.
You should use -k instead of -n.
The -k flag filters the output to show only entries that have the specified property key.
The command must be :
ioreg -c IOPlatformExpertDevice -k IOPlatformUUID
For your question "can this be done in Excel", I believe the short answer is no. There is no native translation service there, as you have noted. Moving that much data in/out piecemeal is obviously not reasonable.
Excel Power Query won't let you call a DLL or external code unless you write your own connector.
If you're going to be doing this often, then maybe it is worth considering writing a connector for Power Query (https://learn.microsoft.com/en-us/power-query/install-sdk) or an Excel Add-In that completes this step.
PowerBI is for visualisation and reporting. It does include some translation tools. If you're just interested in translating a large volume of structured data, however, BI is probably not the right tool.
Especially if this is a one-off task, I suggest you check the built-in GOOGLETRANSLATE() function in Google Sheets: https://support.google.com/docs/answer/3093331?hl=en . If you actually need to produce an Excel file or BI presentation afterward, you can export the translated data back to Excel and proceed.
Based on the comments it looks like @dr.null's suggestion of setting AutoScaleMode to Dpi on your Form will fix your issue.
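For reference, that's a one-line setting on the Form (typically in its constructor or designer file):

this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi;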
For those using new Ionic with React & Vite: make sure to use relative URL paths, as this is what fixed it for me after trying absolute paths.
Checkout this repository which is a port of the famous Microsoft eshopOnContainers project. https://github.com/harshaghanta/springboot-eshopOnContainers
And I need it to be next to each other and not below each other
Build version: 5.4.56 Build date: 2023-12-19 12:36:30 Current date: 2025-03-15 18:47:02 Device: Samsung SM-A346E
Stack trace:
java.lang.UnsatisfiedLinkError: No implementation found for void com.mojang.minecraftpe.NetworkMonitor.nativeUpdateNetworkStatus(boolean, boolean, boolean) (tried Java_com_mojang_minecraftpe_NetworkMonitor_nativeUpdateNetworkStatus and Java_com_mojang_minecraftpe_NetworkMonitor_nativeUpdateNetworkStatus__ZZZ) - is the library loaded, e.g. System.loadLibrary?
at com.mojang.minecraftpe.NetworkMonitor.nativeUpdateNetworkStatus(Native Method)
at com.mojang.minecraftpe.NetworkMonitor.setHasNetworkType(Unknown Source:63)
at com.mojang.minecraftpe.NetworkMonitor.access$100(Unknown Source:0)
at com.mojang.minecraftpe.NetworkMonitor$1.onAvailable(Unknown Source:26)
at android.net.ConnectivityManager$NetworkCallback.onAvailable(ConnectivityManager.java:4184)
at android.net.ConnectivityManager$NetworkCallback.onAvailable(ConnectivityManager.java:4154)
at android.net.ConnectivityManager$CallbackHandler.handleMessage(ConnectivityManager.java:4607)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loopOnce(Looper.java:230)
at android.os.Looper.loop(Looper.java:319)
at android.os.HandlerThread.run(HandlerThread.java:67)
I agree that this is an unexpected behavior of vertical-align: middle and is probably responsible for a ton of vertical alignment hacks to work around the resulting visual artifacts. I too would have also expected that vertical-align: middle would by default align to the middle of the capital letters rather than the lower case or that there would at least be an alternate property like vertical-align: capital-middle or similar. But no such luck.
For reference: Mozilla Developer Network vertical-align: middle documentation.
The example in your question is unfortunately a little overly complicated to demonstrate the underlying problem. A more obvious demonstration of the problem is when you try to align the center line of an image with the center line through the capital letters of some neighboring text (as follows). I hope that this will help you to extrapolate an answer to your original question.
.imageTextPair {
font-size: 50px;
}
.imageTextPair img {
height: 75px;
vertical-align: middle;
}
<span class="imageTextPair">
<img src="https://www.citypng.com/public/uploads/preview/reticle-crosshair-red-icon-free-png-701751694974301y3cnksxiin.png">
Neighboring text
</span>
As you can see, the middle of the image aligns with the center line through the lower case letters as described by the spec.
I tried solving the capital alignment problem using the new "display: flex" but without any luck. The following is my alternative, in-development solution using modern-day tools:
.imageTextPair {
font-size: 50px;
}
.imageTextPair img {
--imagePixelHeight: 75;
height: calc(var(--imagePixelHeight) * 1px);
vertical-align: calc(0.5cap - ((var(--imagePixelHeight) / 2) * 1px));
}
<span class="imageTextPair">
<img src="https://www.citypng.com/public/uploads/preview/reticle-crosshair-red-icon-free-png-701751694974301y3cnksxiin.png">
Neighboring text
</span>
To explain how this works, the baseline of inline images is by default along the bottom of the image. The vertical-align property can currently take a number value that adjusts this baseline upward or downward. The calculation I perform basically moves the image down from the baseline by half the image height (so the middle of the image aligns with the baseline of the surrounding text), then I move the image upwards by half the height of the capital letters (0.5cap) to align the middle of the image exactly with the middle of the capital letters. This should in theory work with any font because the "cap" unit of measure uses the appropriate metrics inside the font.
You can also do the same thing using font relative units (em) so the image scales with the font if you change the font-size:
.imageTextPair {
font-size: 50px;
}
.imageTextPair img {
--imageEmHeight: 1.2;
height: calc(var(--imageEmHeight) * 1em);
vertical-align: calc(0.5cap - (var(--imageEmHeight) / 2) * 1em);
}
<span class="imageTextPair">
<img src="https://www.citypng.com/public/uploads/preview/reticle-crosshair-red-icon-free-png-701751694974301y3cnksxiin.png">
Neighboring text
</span>
The obvious drawback of this approach is that you need to know and set the height of the image explicitly in both the height and vertical-align properties. It would obviously be much nicer if we could implicitly use the height of the corresponding image in our vertical-align calculations. If anybody has some ideas of how this might be accomplished (without using JavaScript), I would welcome some further refinement to this approach.
I realize this is an older discussion, but I’m posting this note for anyone who stumbles upon this topic for the first time...
I’m developing the plugin called TLabWebview, a 3D web browser plugin released as open-source software on GitHub, where all features are available for free. It only supports Android platforms, for VR devices such as the Meta Quest series. However, it works with multiple browser engines (WebView and GeckoView), supports multi-instance functionality, and can render to various targets, including Texture2D and CompositionLayers.
The documentation and API may not be as detailed as those of paid alternatives, but I’m actively enhancing them based on input from users.
I’m optimistic about the plugin’s efficiency since it captures the web browser frame as a byte array and transfers it directly to Unity without any pixel format adjustments (this is a reliable choice for the plugin’s rendering mode). When you opt for HardwareBuffer as the rendering mode, Texture2D and the web browser frame are aligned using HardwareBuffer, making it more efficient than transferring frame pixel data via the CPU.
Here are the links:
- The plugin is available here
- Official page is here
- VR sample is here
There was no better way of doing it, but this feature will now be available in the next release of polars, 1.25.2, on the basis of this commit: feat: Enable joins on list/array dtypes #21687.
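For illustration, a join on a list-typed column would then look like this (made-up frames; per the above, this needs polars 1.25.2+):

import polars as pl

left = pl.DataFrame({"key": [[1, 2], [3]], "val": ["a", "b"]})
right = pl.DataFrame({"key": [[1, 2]], "other": ["x"]})
print(left.join(right, on="key"))  # joins directly on the list column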
This project builds a macOS app for Meld
https://formulae.brew.sh/cask/dehesselle-meld#default
Works great
OK! The problem was in the cache, but I forgot that I have a deploy server configured in my PhpStorm (Apache), so I tried to clear the cache in the project, while I actually needed to clear the cache under the deployment folder. Sorry for being a dummy :)
As Mike said, it is a "Static Tiles" URL, so it has a 200,000 free tile requests per month.
For any one wondering how to view the content of pandas dataframe in Visual Studio 2022, the following can be done:
Open 'Immediate Window' (Debug->Windows->Immediate)
Optionally run the following commands in the immediate window once:
pd.set_option('display.max_rows', None)
pd.set_option('display.width', None)
pd.set_option('display.max_columns', None)
now type the variable name in immediate window and enter. You will see the data in the dataframe in the immediate window.
Got to know the answer: the above won't work if it is run in a proper cluster. I was testing it locally using IntelliJ without a Flink cluster, so IntelliJ runs it in a local JVM and everything is horizontally accessible to the process function. But if we run the above in a cluster, FlinkTableEnvSingleton.getTable won't be accessible from a process function and returns null.
For version > 8.3
If you're using the latest image tag with the command below, you may see a similar issue.
--default-authentication-plugin=mysql_native_password
In that case, you might need to downgrade to version 8.3 and use this mysql_native_password authentication.
You might need to run docker rm mysql or docker compose rm mysql in order to downgrade and rerun docker compose up.
This issue provides more information.
For version 8.4
You may need to add this command --mysql-native-password=ON and remove --default-authentication-plugin=mysql_native_password
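For example, a minimal docker compose sketch of that 8.4 change (the service name and tag are just for illustration):

services:
  mysql:
    image: mysql:8.4
    command: --mysql-native-password=ON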
For version 9+
Version 9 removed this option --mysql-native-password as mentioned in the release notes.
This turned out to be an issue with my MVC Razor view _layout.cshtml file that was being used to render my HTML (which was then being passed to SelectPdf). Several <script> tags were referencing web sites and CDN locations that apparently are no longer available, causing the HTML rendering to fail within SelectPdf. Once I removed those script tags, everything started working again... thanks.
I know this is an old thread, but I'll leave this comment for anyone new to this question...
I’m developing the plugin called TLabWebview, a 3D web browser plugin released as open-source software on GitHub, where all features are available for free. It only supports Android platforms, for VR devices such as the Meta Quest series. However, it works with multiple browser engines (WebView and GeckoView), supports multi-instance functionality, and can render to various targets, including Texture2D and CompositionLayers.
The documentation and API might not be as comprehensive as paid options, but I’m working on improving them with user feedback.
I’m confident in the plugin’s performance because it retrieves the web browser frame as a byte array and passes it to Unity directly, without any pixel format conversion (this is the stable option among the plugin’s rendering modes). If you select HardwareBuffer as the rendering mode, Texture2D and the web browser frame are synchronized using a HardwareBuffer, which is more efficient than passing the frame’s pixel data over the CPU.
Here are the links:
- The plugin is available here
- Official page is here
- VR sample is here