Even if your app only uses HTTPS (this is 99% of all apps), you still need to file a French encryption declaration.
Under French law (Article L2332-1 of the Defense Code) and ANSSI guidance (source), any cryptographic functionality, including HTTPS, requires at least a simple declaration.
Apple also highlights the need to comply with local encryption regulations even for standard libraries (Apple docs).
There are services like NovaStore that can handle the filing if you want to stay compliant.
It is not being re-run the way you assume. The statement you show that uses f() is a top-level statement, so it runs only once, when the module is first imported. It doesn't matter how many of your tests expect a to equal 1; you're not re-executing any code whatsoever. The only time the code in the ./asd module ran was when the module was first imported, before a single test case had executed.
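Here is a minimal sketch of that behavior, shown in Python for concreteness (Node's require/import cache works the same way); the file names and counter are hypothetical:
# asd.py (hypothetical module)
counter = 0

def f():
    global counter
    counter += 1
    return counter

a = f()  # top-level statement: runs exactly once, at first import

# test_asd.py
import asd

def test_one():
    assert asd.a == 1  # passes

def test_two():
    assert asd.a == 1  # still passes: the module body never re-runs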
The reason for this error is that in that SDK version the project is configured to use expo-router instead of plain React Navigation. Expo Router is built on top of React Navigation, which is why the nested NavigationContainer is created.
You are still executing the following instructions after the AT END condition occurs, so the record you are getting is the last record in the file, which remains in the file's buffer.
Better code would be:
READ FILE AT END MOVE 'Y' TO END-SW.
(END-SW is defined in WORKING-STORAGE with an initial value of 'N'.)
IF END-SW NOT EQUAL 'Y' PERFORM the processing you want (in another paragraph).
I'm working on the same feature and created a DB diagram with the minimum columns needed to understand how it will work in a real application.
Here is my DB diagram, which may give you an idea for your implementation:
Product Variation Wise Price in Ecommerce
Thank you!
struct TodoItem {
    let title: String
    let state: State
    let dueDate: DueDate
    let location: Location
    let collaborators: [Collaborator]
}
I managed to fix this by adding a new method in the entity.
public function getIsBookable(): ?bool
{
    return $this->isBookable();
}
I guess behind the scenes ApiPlatform calls something like `$entity->get{$field}` instead of the classic Symfony getter.
What is the TextInputLayout border stroke color? For more information, see https://m3.material.io/components/text-fields/specs.
You can use ImageConvertHQ.com — a free online tool that converts images to JPEG with no software needed. It’s fast and supports multiple formats. Plus, no watermarks or login required.
Migrating apps can be a real headache, huh? I feel your pain. I once had a similar struggle when trying to switch frameworks. It sounds like you've really thrown the kitchen sink at this problem. Have you double-checked that all the dependencies for Expo SecureStore are properly installed? Maybe it's a version compatibility issue. Also, did you try looking at the Expo logs in more detail? That might give a clue about what's going wrong deeper down. Any luck yet?
Image Colorization Problem
In this project, you will tackle the challenge of image colorization, a process that
involves adding color to grayscale images. Image colorization has applications in
various fields, such as restoring old movies and photographs, enhancing satellite
imagery, and assisting in medical image analysis.
The goal is to build a deep learning model that can accurately predict the color
channels of an image given its grayscale version. You will use PyTorch, a popular
deep learning library, to construct and train your model. The project will be
structured around several key tasks, each contributing to the development and
evaluation of your colorization model.
U-Net Architecture
The neural network needs to take in a noised image at a
particular time step and return the predicted noise. Note that the predicted noise is
a tensor that has the same size/resolution as the input image. So technically, the
network takes in and outputs tensors of the same shape. What type of neural
network can we use for this?
What is typically used here is very similar to that of an Autoencoder, which you
may remember from typical "intro to deep learning" tutorials. Autoencoders have a
so-called "bottleneck" layer in between the encoder and decoder. The encoder first
encodes an image into a smaller hidden representation called the "bottleneck", and
the decoder then decodes that hidden representation back into an actual image.
This forces the network to only keep the most important information in the
bottleneck layer.
In terms of architecture, the DDPM authors went for a U-Net, introduced by
(Ronneberger et al., 2015) (which, at the time, achieved state-of-the-art results for
medical image segmentation). This network, like any autoencoder, consists of a
bottleneck in the middle that makes sure the network learns only the most
important information. Importantly, it introduced residual connections between the
encoder and decoder, greatly improving gradient flow (inspired by ResNet in He et
al., 2015).
Here's a description of the UNet architecture:
1. Contracting Path (Encoder):
• The input to the UNet is typically a grayscale or multi-channel image.
• The contracting path starts with a series of convolutional layers
followed by max-pooling layers.
• Each convolutional layer is usually followed by a rectified linear unit
(ReLU) activation function.
• The number of filters typically increases with the depth of the network,
capturing increasingly abstract features.
• Max-pooling layers progressively downsample the spatial dimensions of
the feature maps, allowing the network to learn hierarchical
representations.
2. Bottleneck:
• At the bottom of the U-shaped architecture lies the bottleneck or
central layer.
• It represents the point where the network switches from the contracting
path to the expanding path.
• The bottleneck layer typically consists of convolutional layers without
max-pooling, allowing the network to capture contextual information.
3. Expanding Path (Decoder):
• The expanding path involves upsampling the feature maps and
concatenating them with feature maps from the contracting path.
• Each step in the expanding path involves an upsampling operation
(e.g., transposed convolution or upsampling followed by convolution) to
increase the spatial resolution.
• The concatenated feature maps from the corresponding contracting
path stage serve as skip connections.
• Skip connections help preserve spatial information and assist in the
precise localization of segmentation boundaries.
This was adapted from Lukman Aliyu.
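Since the project below uses PyTorch, here is a minimal sketch of such a U-Net (an illustration with assumed channel sizes, not the required solution): one grayscale channel in, three color channels out, with skip connections concatenating encoder features into the decoder.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convs with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 32)       # contracting path
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # expanding path
        self.dec2 = double_conv(128, 64)     # 128 = 64 upsampled + 64 skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)      # 64 = 32 upsampled + 32 skip
        self.out = nn.Conv2d(32, 3, 1)       # 1x1 conv down to RGB

    def forward(self, x):
        s1 = self.enc1(x)                    # skip connection 1
        s2 = self.enc2(self.pool(s1))        # skip connection 2
        b = self.bottleneck(self.pool(s2))
        d2 = self.dec2(torch.cat([self.up2(b), s2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))
        return torch.sigmoid(self.out(d1))   # colors in [0, 1]

# Shape check: a batch of 32x32 grayscale images maps to 32x32 RGB.
print(MiniUNet()(torch.randn(2, 1, 32, 32)).shape)  # torch.Size([2, 3, 32, 32])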
Requirements
• Prepare the data
• Build a U-net architecture
• Train the model on the prepared dataset
• Display 5 images from the training set in 3 formats: original color, grayscale, and colorized output
• Run inference on 10 images in the test set
• Display the 10 images in 3 formats: original color, grayscale, and colorized output
1. Setup and Imports
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset, random_split
from torchvision import datasets, transforms
from torchvision.transforms.functional import to_pil_image, resize
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from tqdm import tqdm
import os
2. Load the Dataset
class ColorizationDataset(Dataset):
    def __init__(self, dataset, transform_input=None, transform_target=None):
        self.dataset = dataset
        self.transform_input = transform_input
        self.transform_target = transform_target

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        color_img, _ = self.dataset[idx]
        gray_img = transforms.functional.to_grayscale(color_img, num_output_channels=1)
        if self.transform_input:
            gray_img = self.transform_input(gray_img)
        if self.transform_target:
            color_img = self.transform_target(color_img)
        return gray_img, color_img

transform_input = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
transform_target = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])

base_train_dataset = datasets.CIFAR10(root='./data', train=True, download=True)
base_test_dataset = datasets.CIFAR10(root='./data', train=False, download=True)

train_full = ColorizationDataset(base_train_dataset, transform_input, transform_target)
test_dataset = ColorizationDataset(base_test_dataset, transform_input, transform_target)

train_size = int(0.8 * len(train_full))
val_size = len(train_full) - train_size
train_dataset, val_dataset = random_split(train_full, [train_size, val_size])

batch_size = 16
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
# Just looking at the data and trying to visualize it
random_img_idx = torch.randint(0, 1000, (1,)).item()
print(train_dataset[0][0])
test_image = train_dataset[random_img_idx][0]  # 0 for the grayscale part of the (gray, color) tuple
test_image = resize(test_image, (250, 250), antialias=None)  # upscale for better visualization
#print(test_image.shape)
#print('Number of channels in test_image: ', test_image.shape[0])
to_pil_image(test_image).show()  # convert the tensor to a PIL image before showing it
3. Define the Model Architecture
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
class ComprehensiveLoss(nn.Module):
    def __init__(self):
        super(ComprehensiveLoss, self).__init__()

    def forward(self, input, target):
        input = torch.clamp(input, 1e-7, 1 - 1e-7)  # Prevent log(0)
        loss = -1 * (target * torch.log(input) + (1 - target) * torch.log(1 - input))
        return loss.mean()
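A quick aside: this hand-rolled loss is just per-pixel binary cross-entropy, so (assuming outputs and targets stay in [0, 1], as the Sigmoid and ToTensor guarantee here) it should agree with PyTorch's built-in nn.BCELoss up to the clamping epsilon. A sanity check:
# Verify ComprehensiveLoss matches nn.BCELoss on random data in (0, 1).
pred = torch.rand(4, 3, 32, 32)    # fake "colorized" outputs
target = torch.rand(4, 3, 32, 32)  # fake color targets
print(ComprehensiveLoss()(pred, target).item(), nn.BCELoss()(pred, target).item())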
4. Training the Model
def train_model(model, train_loader, val_loader, criterion, optimizer, num_epochs):
    model.to(device)
    for epoch in range(num_epochs):
        model.train()
        running_loss = 0.0
        for gray_imgs, color_imgs in tqdm(train_loader, desc=f"Epoch {epoch+1}/{num_epochs}"):
            gray_imgs = gray_imgs.to(device)
            color_imgs = color_imgs.to(device)
            outputs = model(gray_imgs)
            loss = criterion(outputs, color_imgs)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        avg_loss = running_loss / len(train_loader)
        print(f"Epoch [{epoch+1}/{num_epochs}], Training Loss: {avg_loss:.4f}")

        # Validation loss
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for gray_imgs, color_imgs in val_loader:
                gray_imgs = gray_imgs.to(device)
                color_imgs = color_imgs.to(device)
                outputs = model(gray_imgs)
                loss = criterion(outputs, color_imgs)
                val_loss += loss.item()
        val_loss /= len(val_loader)
        print(f"Validation Loss: {val_loss:.4f}")
5. Showing Performance on Training Data
def visualize_colorization(model, dataset, device='cpu', num_images=5):
    model.eval()
    fig, axs = plt.subplots(num_images, 3, figsize=(10, 4 * num_images))
    with torch.no_grad():
        for i in range(num_images):
            gray, color = dataset[i]
            gray = gray.unsqueeze(0).to(device)
            output = model(gray).squeeze(0).cpu()
            axs[i, 0].imshow(to_pil_image(color))
            axs[i, 0].set_title("Original Color")
            axs[i, 1].imshow(to_pil_image(gray.squeeze(0).cpu()), cmap='gray')
            axs[i, 1].set_title("Grayscale Input")
            axs[i, 2].imshow(to_pil_image(output))
            axs[i, 2].set_title("Colorized Output")
            for j in range(3):
                axs[i, j].axis("off")
    plt.tight_layout()
    plt.show()
6. Making Inferences
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Autoencoder()
criterion = ComprehensiveLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)
# Train the model
train_model(model, train_loader, val_loader, criterion, optimizer, num_epochs=10)
# Visualize on training data
visualize_colorization(model, train_dataset, device=device, num_images=5)
# Visualize on test data
visualize_colorization(model, test_dataset, device=device, num_images=10)
I had a private registry container that was causing this issue. My fix was to re-push the affected images; the manifests from a recent change were affected. The only change I had made was moving the underlying registry folder on the local Docker instance node from /docker-registry to /mnt/nfs/docker-registry. When I rsync'd the files, something must have altered the manifest SHA sum. I'm not sure that is the exact root cause, but it's the only thing I can think of that happened. Re-pushing the images was super fast since all the layers were already present; a new manifest SHA was created instantly and I was able to pull again.
It may be caused by this problem:
@TestPropertySource(locations = { "classpath:application.yml", "classpath:contract-test.properties" })
@ActiveProfiles({ "local" })
Check application.yml and contract-test.properties.
Ensure that the endpoint IDs in all Actuator-related configurations are legal (only lowercase letters, numbers, and hyphens are allowed).
Are you trying to randomly change the color of a specific layer inside a LayerDrawable?
If so, you can find the target drawable within the LayerDrawable either by index (getDrawable(int index)) or by ID (findDrawableByLayerId(int id)), and then change its color using setTint(int tintColor).
If your LayerDrawable is created in XML, you can directly assign an ID to each layer like this:
<item
android:id="@+id/background_layer"
android:drawable="@drawable/bottom_background_drawable" />
Got the same issue here!
Date 2025-04-26
Workbench 8.0.40
MySQL 8.4.5
It looks like when importing CSV data through the wizard, or even when entering data row by row, Workbench puts bXXXXX (XXXXX being your data) in front of the intended BINARY(1) data.
It will also put a b in front of all subsequent data, even if the data type was defined carefully in the table editor.
With the BINARY(1) data in the middle of the attributes, note all the following attributes:
INSERT INTO `noteluq`.`Cours` (`no_cours`, `cours_nom`, `credit`, `cycle`, `actif`, `departement`, `temps_actif`) VALUES ('ADM 1010', 'ADMIN DE GRANDS', '3', '1', b'1', b'ADMIN', b'0001');
The workaround I found is to move your (hopefully only) binary attribute to the end of all attributes.
After correction:
INSERT INTO `noteluq`.`Cours` (`no_cours`, `cours_nom`, `credit`, `cycle`, `departement`, `temps_actif`, `actif`) VALUES ('ADM 1010', 'ADMIN DE TI CULS', '3', '1', 'ADMIN', '0001', b'1');
INSERT INTO `noteluq`.`Cours` (`no_cours`, `cours_nom`, `credit`, `cycle`, `departement`, `temps_actif`, `temps_retrait`, `actif`) VALUES ('ADM 1020', 'ADMIN DE GRANDS', '3', '1', 'ADMIN', '0001', '0004', b'0');
The import-data wizard also seemed affected by the order in which the table's columns were defined. When I try to import data, even if I put "actif" at the very end of the attributes and save, it appears elsewhere and creates an error while loading the data.
Hope this information helps.
If you've tried everything that's been written above and you have a WordPress website, it might just be that you're missing an index.php file.
Thank you for using DolphinDB.
To resolve this issue, you can forcibly remove the recovering partitions of the table using dropRecoveringPartitions from the ops module.
use ops
dropRecoveringPartitions(dbPath="dfs://stock", tableName="stock")
Once that's done, you can safely delete the table using dropTable.
If you run into any other issues, don't hesitate to ask - we're happy to help!
Solve the problem by upgrading Windows: the Docker client on Windows needs at least version 22H2.
The workaround is to downgrade the Xdebug version (choosing, of course, a version that matches the project's PHP version).
Others used the same workaround, as described here: https://www.reddit.com/r/PHPhelp/comments/q4cd85/xdebug_causing_err_connection_reset/
I don't know where to submit this issue to get it fixed (or at least investigated). Please share if you know where to do it.
Also, please share if you have a better solution.
Hope this helps!
postMessage is the standard approach for sending data from React Native to a WebView, especially for passing user data. injectJavaScript directly executes code in the WebView context, which is useful for DOM manipulation or calling functions, but less useful for just passing data. Your current setup with postMessage and event listeners is the right pattern for this use case.
I had this problem with third-party sample code; it was linking with the option "-mwindows" in CMakeLists.txt. That's why there was no console output.
Make sure not to link with the option "-mwindows".
text = input()
exit_text = ['Done', 'done', 'd']
while True:
    for letter in reversed(text):
        print(letter, end='')
    print()
    text = input()
    if text in exit_text:
        break
I’m the maintainer of Plotlars.
Good news—starting with Plotlars 0.9.0 (just released!) you can add a secondary y-axis to any cartesian plot.
The problem was solved by wrapping the prompt with the chat template that Llama models use during instruction tuning. Adding the special tokens to the prompt better steered the model in the right direction.
Here is a code block that demonstrates what worked:
# Prepare a prompt for email re-write task
original_text = "Hi guys, just checking in to see if you finally finished the slides for next week when we meet with Jack. Let me know asap. Cheers, John"
messages = [
{"role": "system", "content": "You are an AI assistant that revises emails in a professional writing style."},
{"role": "user", "content": f"Revise the following draft email in a professional voice, preserving meaning. Only provide the revised email.\n\n### Draft:\n{original_text}"}
]
# Apply the chat template (adds special tokens like <|start_header_id|>, etc.)
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False, # We want the string, not tokens yet
add_generation_prompt=True # Ensures the prompt ends expecting the assistant's turn
)
print("--- Formatted Prompt ---")
print(prompt)
print("------------------------")
Adding the line below to
/etc/httpd/conf/httpd.conf
saved my project. One long-running script (2-4 minutes) just quit: no error, no log, nothing. It didn't even finish what was being sent out. I could not find any log matching the time stamps of the failures (Alma 8, PHP 8, httpd 2.4.37). Something in Apache can also kill a running script and leave NO trail. THANK YOU to the poster who mentioned this change.
ProxyTimeout 600
Is there really no expert who can answer this? I ran into the same problem. I'm using NotificationHub.php, and the error is identical.
I actually solved this and wanted to post the solution.
The fix was to convert to SliverGridDelegateWithMaxCrossAxisExtent and set maxCrossAxisExtent to the width of the image, and then leave childAspectRatio as the width/height of the image.
Ok, it seems clangd is using gRPC, so the example I have been trying to follow is not valid (clangd version 18.1.3, 1ubuntu1).
Thank you for the clues! I had to run the solution from @DmytroMitin through the debugger to grasp what was happening.
package dynamicProperties
import scala.language.dynamics
class DynamicProps(val props: java.util.Properties, val propName: String = "", val prop: Option[String] = None) extends Dynamic:
  def updateDynamic(name: String)(value: String): AnyRef =
    val newName = if propName.isEmpty then name else propName + "." + name
    props.setProperty(newName, value)

  def selectDynamic(name: String): DynamicProps =
    val newName = if propName.isEmpty then name else propName + "." + name
    val newProp = Option(props.getProperty(newName))
    DynamicProps(props, newName, newProp)

  def applyDynamicNamed(name: String)(args: (String, String)*): Any =
    if name != "add" then throw IllegalArgumentException()
    for (k, v) <- args do
      props.setProperty(k, v)

  override def toString: String = prop.getOrElse("n/a")
This is my test code.
import dynamicProperties.DynamicProps
import org.scalatest.funsuite.AnyFunSuite
class DynamicPropsTest extends AnyFunSuite:
  test("Set username") {
    val sysProps = DynamicProps(System.getProperties)
    sysProps.username = "Fred"
    assert(sysProps.username.toString() == "Fred")
    sysProps.xxx.yyy = "bbb"
    assert(sysProps.xxx.yyy.toString() == "bbb")
  }

  test("Assign java.home") {
    val sysProps = DynamicProps(System.getProperties)
    val home = sysProps.java.home.toString()
    val javaHome: String = System.getProperty("java.home")
    assert(home == javaHome)
  }

  test("Add key/value pairs") {
    val sysProps = DynamicProps(System.getProperties)
    sysProps.add(username="Fred", password="Secret")
    assert(sysProps.username.toString() == "Fred")
    assert(sysProps.password.toString() == "Secret")
  }
For setting properties with multiple periods, I noticed the code jumped to selectDynamic before going to updateDynamic.
The == operator follows specific type coercion rules. null and undefined are equal to each other by special definition in JavaScript, but they don't equal false because comparison with booleans involves converting the boolean to a number first (false becomes 0), and neither null nor undefined equal 0.
Is there any Upwork API to apply to jobs?
For Firebase, you need to run the download commands like "sudo npm install dotenv --save" inside the functions folder; otherwise, it won't work. This was the issue that I just had.
You can create a virtual environment on the USB drive and that will allow you to run cmd for the USB. The file would be something like 'cmd venv.bat'.
The following works to return all rows:
const results = await sequelize.query(
  `SELECT id FROM moneys`,
  { type: QueryTypes.SELECT }
);
return results;
This is in line with the documentation at https://sequelize.org/docs/v6/core-concepts/raw-queries.
Without another bridging condition to tell whether the sender on a comment record is the buyer or the seller, the query is under-constrained, and a COUNT of anything will return the COUNT of the under-constrained Cartesian product when GROUPing BY only order_id here.
This is because "senders" in the comment system are a completely independent role from buyer or seller in the order system, stemming from order_responses having a composite primary key: order_responses.order_id, unlike order_responses (order_id, response_id), is not a key in its own right that can be joined to alone without further binding conditions found as foreign keys elsewhere, e.g. if orders had a FOREIGN KEY to requests (request_id) that gave us a separate, deterministic link to buyer_id.
We would need additional bridging journal entries as an (INNER) JOIN, as exist in the comment_receivers table, to pare down who is sending what to whom here by eliminating records.
You need to place the Exit Sub before your LowHours subroutine to prevent falling through to it after the loop completes.
This helped me:
Stop all Docker processes in Task Manager → Processes.
Stop TiWorker in Task Manager → Processes.
Restart Trusted Installer in Task Manager → Services.
Restart Docker Desktop.
Detailed description: https://stackoverflow.com/a/79594451/12691808. TiWorker suspends Trusted Installer → Trusted Installer suspends Windows Optional Features → Docker Desktop does not respond while Windows Optional Features is stuck.
As of Flutter 3.31 (beta channel), Flutter experimentally supports hot reload on the web as well, with a flag on run:
flutter run -d chrome --web-experimental-hot-reload
Make sure:
Phone and PC on same Wi-Fi
Firewall temporarily disabled
Spring Boot binds on 0.0.0.0
Use correct IP and port in Flutter app
I'm not sure what the exact cause of this issue is, but I removed the console.log from the custom block and it works fine. I'm new to Node.js and Express, so I couldn't pin down the exact cause.
My example follows; I tackled the issue with "express-validator": "^7.2.1":
.custom(async (value) => {
  const ticket = await Ticket.findOne({ _id: value })
  if (!ticket) {
    throw new Error("NOT_EXIST")
  }
  // console.log('ticket result', ticket)
  return true
})
I know it's late, but I'd like to share my experience and archive this solution for newbie learners like me.
I had a component that I knew was breaking, but I didn't know why. My solution was to run this component inside a useEffect(() => setTimeout(() => renderComponent(), xx), []).
With this, I was able to capture the thrown error in the browser.
Argh you React for swallowing errors !
I am not sure if it is taking the default vsix from one of your project subdirectories.
extest gives you an option to override the vsix file: if you want to install your own extension, use the -f option to specify the path to your vsix file.
This happened to me as a beginner a couple of times!
Add this to the header of your request:
KEY: Accept
VALUE: application/json
The server should now respond with a response body!
Just found out that 'forwarding' is not a planned feature.
https://github.com/spring-projects/spring-framework/issues/34834#event-17418510348
Hey, I have a PR for this; take a look and comment if you have any suggestions.
use_react_native!(
:hermes_enabled => true,
...
)
Enable the Hermes engine and try running the app again.
I am going to answer this with a cool and easy-to-follow example. I had to do some research before answering, and simplifying things always helps me, so let's first introduce the cache terminology:
Cache: a cache is like a special reading desk where the most frequently used books are placed. Instead of walking deep into the shelves every time, the computer can grab the specific book from the desk.
Now how does this relate to your question?
The ArrayList stores data in a row on the reading desk. So when the computer reads one book, it already has the next one nearby, making reading fast and efficient, and therefore making sequential access faster.
The LinkedList is like a scattered set of books where you have to follow notes leading to each next book. Every time the computer needs a new book, it has to walk to another shelf.
Now you might want a more "technical" answer, so here it is:
In ArrayList:
Since all elements are in a single block, accessing one element automatically loads nearby elements into the cache. This makes iteration much faster, as the CPU retrieves several elements in one go.
In LinkedList:
Since nodes are scattered across memory, accessing one node doesn't guarantee the next one is nearby. Each node access involves a pointer dereference, leading to more cache misses.
As of recently, I'm using PuppyGit. It is available on F-Droid. Open Source app. Works great!
MediaSession is only supported for interacting with audio and video players, not MediaRecorder, and the onMediaButtonEvent callback will be triggered only when the MediaSession is registered and your app has been granted audio focus.
It seems you are trying to control audio recording with your Bluetooth headset buttons; you should go with your second approach, i.e. a generic BroadcastReceiver.
Documentation link: https://developer.android.com/media/legacy/mediasession
I solved this issue by deleting the local dependency specified with file:// in package.json.
Yes, ArrayList stores references in a contiguous array, making sequential access faster for the CPU cache. LinkedList nodes are scattered in memory, causing more cache misses.
For me, this can occur when I leave gdb open at the stage where one would input bt for too long. If I redo the trace generation with the cached DebugInfoD symbols and the same segfault data, the memory becomes accessible.
Using a for loop and adding items that match the criteria to a new list can often be more performant than LINQ for large lists:
var filteredList = new List<MyObject>();
for (int i = 0; i < myObjects.Count; i++)
{
var o = myObjects[i];
if (o.Value > 100 && o.Date < DateTime.Now.AddDays(-7) && SomeComplexCalculation(o) > 50)
{
filteredList.Add(o);
}
}
Consider optimizing SomeComplexCalculation itself for better overall performance. If appropriate, you could potentially pre-calculate the results of SomeComplexCalculation and store them as a property on MyObject to avoid repeated expensive computations during filtering.
This is a very old question. SweetAlert may have been non-blocking originally, but I downloaded it recently (April 25, 2025) and I am able to have it block, i.e. be synchronous. It uses Promises.
async function foo()
{
    if ( await yesno( "Answer Yes or No" ) ) {
        // They answered YES
    } else {
        // They answered NO
    }
}

async function yesno( question )
{
    var rtn;
    await swal({
        closeOnEsc: false,
        closeOnClickOutside: false,
        text: question,
        buttons: {
            no: { text: "No", value: 0 },
            yes: { text: "Yes", value: 1 },
        },
    }).then( value => {
        switch (value) {
            case 1:
                rtn = true;
                break;
            case 0:
            default:
                rtn = false;
                break;
        }
    });
    return rtn;
}
This is a common pitfall in machine learning workflows!
Here’s the correct order you should follow:
First split your dataset into training and testing sets.
Then fit the StandardScaler only on the training set (i.e., use scaler.fit(X_train)).
After fitting, transform both the training and testing data (X_train_scaled = scaler.transform(X_train) and X_test_scaled = scaler.transform(X_test)).
Why?
If you scale the full dataset before splitting, information from the test set "leaks" into the training process — because the mean and standard deviation used for scaling are computed using all the data, including the test set. This can mess up the model's generalization ability and make evaluation unreliable.
Scaling after splitting makes sure the test data remains unseen and truly independent.
Quick fix for your case:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# First split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Then scale
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
Combine stealth techniques, human-like mouse movement, resilient locators, and network monitoring. Twitter aggressively detects bots through just about everything, so masking automation traits and randomizing delays are essential. Use playwright-extra stealth plugins, simulate realistic mouse paths, and prefer network-level authentication checks when possible. For hidden or delayed buttons, keyboard navigation (Tab + Enter) can sometimes bypass detection better than direct clicks. Finally, integrate proxy rotation and CAPTCHA solvers like 2Captcha.
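For illustration, here is a rough sketch of those ideas using Playwright's Python API (the stealth plugins mentioned above live in the Node ecosystem; the target URL and coordinates here are placeholders):
import random
from playwright.sync_api import sync_playwright

def human_move(page, x1, y1, x2, y2):
    # steps > 1 makes Playwright emit intermediate mouse events,
    # which looks less robotic than a single jump.
    page.mouse.move(x1, y1)
    page.mouse.move(x2, y2, steps=random.randint(15, 40))

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://example.com")                  # placeholder target
    human_move(page, 100, 100, 400, 300)
    page.wait_for_timeout(random.uniform(500, 1500))  # randomized delay
    page.keyboard.press("Tab")    # keyboard navigation can reach hidden buttons
    page.keyboard.press("Enter")
    browser.close()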
I also had an error 65. In my case it was because of simple syntax errors (missing semicolon, etc.) in code that was, because of preprocessor variables, not built when using Xcode to build a debug version, but was built when using xcodebuild to build a release version. After I built a release version in Xcode, saw and fixed the errors, the error 65 disappeared.
The key here was that xcodebuild wasn't building the same code as Xcode.
Error : unexpected token(s) preceding '{' skipping apparent function body - Actor.h
Solution : Adding #include <string> in Actor.h worked for me
You can right-click the table in pgAdmin 4, click Properties, and go to the Constraints tab, which shows the primary key info; there you'll see the constraint name (by default tableName_pkey) and the columns. Then, if you wish, you can click delete and make a new constraint with the columns you want. It's good to take a screenshot of the constraint before you delete it, so you can refer to the screenshot while you make the new constraint.
You can simply use a here-document:
result=$(python <<EOF
import stuff
code = "${code}"
print("Hello from Python")
print("Code variable is:", code)
EOF
)
This way the Python code stays readable, and Bash variables like $code are expanded correctly.
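One caveat: interpolating ${code} directly into the Python source breaks if the value contains quotes or newlines. If that's a concern, a safer variant (same idea, assuming your variable is named code) passes it through the environment instead:
result=$(code="$code" python <<'EOF'
import os

code = os.environ["code"]  # read the Bash variable without interpolation
print("Code variable is:", code)
EOF
)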
The crash happens because your Docker setup probably lacks enough shared memory and your browsers.json has incorrect path settings. To fix it, set shm_size: "2g" in your docker-compose.yml under the Selenoid service and change the Chrome path to "/wd/hub" in browsers.json. Also, remove --headless from your Chrome options, as VNC images require a visible browser. After these changes, your tests should run without crashes.
First, create a Dictionary object to store unique values. Then loop through Column C of DataSheet (starting at row 2 to skip the header) and add each non-empty value to the dictionary (dictionaries automatically handle uniqueness). Finally, transfer the unique values from the dictionary to Column 16 of the Divide sheet.
You cannot compare 'a', which is of type str, with 97, which is of type int.
To do that, you need to convert 97, which is an integer, into a character using the chr() function.
>>> print('a' == chr(97))
True
Store files privately. Never expose direct links.
Authenticate every access through your backend — validate user → fetch file → stream it.
Use short-lived tokens if needed, and log all access.
Security by design, not by obscurity.
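As a rough sketch of that flow (using Flask as an assumed backend; the directory, token scheme, and user_may_access helper are all hypothetical stand-ins):
import os
from flask import Flask, abort, request, send_file

app = Flask(__name__)
FILES_DIR = "/srv/private-files"  # private storage, never served directly

def user_may_access(token: str, file_id: str) -> bool:
    # Hypothetical stand-in: replace with a real short-lived-token check
    # plus a per-user permission lookup in your database.
    return token == "demo-token"

@app.route("/files/<file_id>")
def serve_file(file_id):
    token = request.headers.get("Authorization", "")
    if not user_may_access(token, file_id):
        abort(403)                                  # validate the user first
    safe_name = os.path.basename(file_id)           # block path traversal
    app.logger.info("file %s accessed", safe_name)  # log all access
    return send_file(os.path.join(FILES_DIR, safe_name))  # fetch and stream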
I am on BingX. My API key was removed while it still had time before its deadline, and then they manipulated my account: my account data was deleted. For example, they deleted the orders related to several currencies that I had purchased. How can I get that data back?
I had a similar experience during my university course. Initially, Z notation felt theoretical, but when I applied it to a project — "Hospital Appointment System" — for requirement gathering, it made a big difference. Using Z helped me clearly define system behavior and logic without ambiguity. I realized that when you apply Z notation in real projects, especially during early requirement analysis, you truly understand its value. It brings precision and clarity that is hard to achieve with informal methods. That's why it’s prioritized in critical systems where correctness matters.
The error explicitly states that the view is not updatable because it references multiple tables/views. Even if view1 and view2 are updatable themselves, combining them with a FULL OUTER JOIN makes the top-level view non-updatable.
If username and password are set in the application.properties file, then we can't get the generated security password.
Removing the properties below from application.properties will generate it:
spring.security.user.name= (your username)
spring.security.user.password= (your password)
React doesn't have a way to know that you have updated some data via fetch; you have to let it know something changed.
I'm not exactly sure what your useAsync hook does, but I suppose it just fetches the data from a backend while watching for value changes in the dependency array ([id] here).
First of all, I just want to make clear that this is my subjective recommendation, not anything objectively best.
You've got a good idea about handling this manually via state with comments. It's going to work perfectly fine; you just need to add the state, a useEffect for watching changes in the book object, and handle the comment adding via a setComments setter passed to the CommentForm component.
You can do some more optimization in the solution described above, but what I'd really like to mention is React's new useOptimistic hook. It's meant exactly for use cases like this. Basically, it optimistically updates the UI with the new data before the actual fetch to the backend completes, providing good UX. And if it fails, the rollback is seamless.
In your scenario, you would add the useOptimistic hook alongside the comments useState hook:
export function BookItem() {
  const [comments, setComments] = useState<{
    userId: string,
    text: string
  }[]>([]);

  const [optimisticComments, addOptimisticComment] = useOptimistic(
    comments,
    (state, newComment) => [
      ...state,
      {
        ...newComment,
        sending: true
      }
    ]
  );

  const { bookId } = useParams<{ bookId: string }>();
  const id = parseInt(bookId, 10);
  const { data: book, loading } = useAsync(
    () => bookService.loadBookWithComments(id),
    [id]
  );

  useEffect(() => {
    if (book) setComments(book.comments);  // guard: book is undefined until loaded
  }, [book]);

  if (loading) return <p>Loading...</p>;
  if (!book) return <p>Book not found</p>;

  return (
    <div>
      <h1>{book.name}</h1>
      <ul>
        {optimisticComments.map((c, index) => (
          <li key={index}>
            <strong>{c.userId}</strong>: {c.text}
            {/* optionally some loading animation if "c.sending" */}
          </li>
        ))}
      </ul>
      <CommentForm
        bookId={book.id}
        addOptimisticComment={addOptimisticComment}
        setComments={setComments}
      />
    </div>
  );
}
and in the CommentForm:
export function CommentForm({ bookId, addOptimisticComment, setComments }: {
  bookId: number,
  // rest of the types here
}) {
  const [text, setText] = useState("");

  const { trigger } = useAsyncAction(async () => {
    const newComment = { userId: "Me", text };
    addOptimisticComment(newComment);
    await bookService.createNewBookComment(bookId, text);
    setComments(comments => [...comments, newComment]);
  });

  // ... rest of the code
}
🗒️ And just a quick note: a little downside of this solution is not being able to use the comment ID as the key in the .map method. This is obviously because you don't have an ID before your backend responds with a generated one, so keep this in mind.
If you have any questions regarding the usage, of course feel free to ask.
I found this error in the middle of the project. I think it's a warning, not an error. Can anyone explain my question above?
In PostgreSQL, updating a view is feasible, but only if the view is straightforward and based solely on one table. The view cannot be updated directly if it involves multiple tables (as with a JOIN). When the view contains multiple tables, you can specify how updates should be applied by using INSTEAD OF triggers. This lets you define custom logic for updating the underlying tables whenever the view is modified.
You said that you tried with a getter but weren't successful. I will quote what was said in this answer: plainToInstance will return an instance, so you won't be able to see the computed getter property. To do so, use instanceToPlain, which is meant to serialize that object.
Very simple with a regex:
private static boolean isNumeric(String str){
    // Matches one or more digits or dots; note this also accepts
    // strings with several dots, such as "1.2.3".
    return str != null && str.matches("[0-9.]+");
}
Sounds like a user-assigned managed identity could do it. Create one and try to follow the instructions:
Log in with a user-assigned managed identity. You must specify the client ID, object ID or resource ID of the user-assigned managed identity with --username.
az login --identity --username 00000000-0000-0000-0000-000000000000
It will solve your issue.
I've seen A LOT of counterintuitive answers for this; I really want to clear up the confusion for everybody.
I actively searched for this question so I could write the answer for it.
It's simple.
-------------------------------------------------------------------------------------------------------
An example would be helpful, so let's say I have a Grid (ancestor) that contains a Button (descendant).
Each of them can have a Preview and a no-Preview event for the SAME action (the no-Preview is called bubbling or something, but that sounds very stupid to me, so I won't call it bubbling).
Let's say the event is MouseRightButtonDown.
Both Grid and Button can catch the MouseRightButtonDown event and do something when they catch it with a method in code-behind (obviously); they can both also catch the PreviewMouseRightButtonDown event, so now we have 4 methods.
Obviously when you do a mouse down on the Button, you'll also hit the Grid, so which method will run first?
The order is Preview -> no-Preview.
In Preview, the order is Ancestor -> Descendant.
In no-Preview, the order is Descendant -> Ancestor.
When you set e.Handled = true in any of the 4 methods, it prevents the remaining methods from running (unless you do something with HandledEventsToo, but I don't know anything about that yet).
In ScheduledExecutorService, if a task throws an exception and it's not caught inside the task, the scheduler cancels it automatically.
In my code, the RuntimeException caused the task to stop after the first run.
To fix it, I should catch exceptions inside the task using a try-catch block, so that the scheduler can continue running the task even if an error happens.
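The same pitfall exists with any recurring scheduler. Here is the fix sketched in Python rather than Java (a toy loop standing in for scheduleAtFixedRate; risky_work is hypothetical): the try/except lives inside the task body, so one bad run cannot kill the schedule.
import time

def risky_work():
    raise RuntimeError("boom")      # stands in for work that sometimes fails

def safe_task():
    try:
        risky_work()
    except Exception as exc:        # caught inside the task, not by the scheduler
        print(f"run failed, continuing: {exc}")

def run_every(interval_s, task, runs=3):
    for _ in range(runs):           # keeps ticking because task() never raises
        task()
        time.sleep(interval_s)

run_every(0.1, safe_task)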
For v4, the CLI interface has been moved to the @tailwindcss/cli package:
npm install @tailwindcss/cli
npx @tailwindcss/cli
https://github.com/tailwindlabs/tailwindcss/discussions/17620
The following code
open System
type UserSession = {
    Id: string
    CreatedAt: DateTime
    LastRefreshAt: DateTime option
}

type Base () =
    let sessions = [{Id="1"; CreatedAt=DateTime.Now.AddMonths(-1); LastRefreshAt=None};
                    {Id="2"; CreatedAt=DateTime.Now.AddMonths(-1); LastRefreshAt=Some(DateTime.Now)};
                    {Id="3"; CreatedAt=DateTime.Now.AddMonths(-1); LastRefreshAt=Some(DateTime.Now.AddDays(-15))}]
    member this.Delete (f : UserSession -> bool) =
        List.map f sessions

type Derived () =
    inherit Base()
    member this.DeleteAbandoned1 (olderThan:DateTime) =
        base.Delete (fun session ->
            session.CreatedAt < olderThan &&
            // error: Value is not a property of UserSession
            (session.LastRefreshAt.IsNone || session.LastRefreshAt.Value < olderThan)
        )
    member this.DeleteAbandoned2 (olderThan:DateTime) =
        base.Delete (fun session ->
            session.CreatedAt < olderThan &&
            // error: Value is not a property of UserSession
            (session.LastRefreshAt.IsNone || session.LastRefreshAt < Some(olderThan))
        )

let t = Derived ()
printfn "%A" (t.DeleteAbandoned1(DateTime.Now.AddDays(-14)))
printfn "%A" (t.DeleteAbandoned2(DateTime.Now.AddDays(-14)))
will output
[true; false; true]
[true; false; true]
val it: unit = ()
in .NET 9, i.e. both versions of your DeleteAbandoned seem to work, both in FSI and compiled. So there seems to be something else going on in your code. Could you provide some additional details?
IndexedDB is a low-level NoSQL database built into the browser.
Storage limit: Often hundreds of megabytes to several gigabytes, depending on the browser and the device.
You can store structured data like JSON objects, blobs, and even files (like .xml).
Asynchronous and powerful, but a little more complex to use than LocalStorage.
Text("Hello, world!")
.accessibilityLanguage("en")
Text("안녕하세요")
.accessibilityLanguage("ko")
Text("안녕하세요: 24")
.accessibilityLanguage("ko")
OK, I used plain-text auth for a successful test. I copied the output of this command into client.properties:
kubectl get secret kafka-user-passwords --namespace kafka -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1
And the client.properties file looks like this:
security.protocol=SASL_PLAINTEXT
#sasl.mechanism=SCRAM-SHA-256
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="user1" password="OUTPUT_OF_GET_SECRET";
One thing to consider is that, like any other request, it can be intercepted using something like Proxyman. This means that unless you encrypt the files yourself, the user can intercept the download and get full access to them.
This is an old post, but in case anyone is still looking at it, it seems the underlying issue was resolved beginning in PHP 7.4, when loadHTML() was upgraded to handle HTML5 tags. To work properly, this also requires libxml2 version 2.9.1 or later.
I'm also facing a somewhat similar problem: once I change the IP address to a static address, the machine refuses to connect to the network, and I'm failing to join it to the domain.
• Create a valid SSL certificate with proper fields (SubjectAlternativeName, Server Authentication).
• Or install your CA certificate manually on the iOS device (Settings > General > About > Certificate Trust Settings).
• Or better: use real trusted certificates (for example, from Let’s Encrypt).
You can force your AVD to use a custom resolution by editing its config.ini. Here’s how:
Locate your AVD folder and open config.ini
On macOS/Linux it’s usually under ~/.android/avd/:
Add or modify these lines (create them if they don’t exist):
hw.lcd.width=1080
hw.lcd.height=2340
Save and exit, then restart the AVD.
In general, you can consult your log file CMakeOutput.log.
Maybe configure it without OpenSSL?
./configure -- -DCMAKE_USE_OPENSSL=OFF
ref: https://discourse.cmake.org/t/how-to-compile-dcmake-use-openssl-off/1271