If you are using Spring Boot 3.x, the following properties in application.properties need to be changed from:
spring.redis.host=redis
spring.redis.port=6379
to
spring.data.redis.host=redis
spring.data.redis.port=6379
ANSWER: Not seeing the "homestead" logo doesn't mean that your environment isn't working. I worked through all of my other issues, and my site is working now even though the logo doesn't show up.
For Windows you need to add:
import os

if os.name == 'nt':
    os.system('color')
HYPER-V
Approach (i): use the option to stop the Hyper-V service and restart your system; it should work.
If approach (i) does not work, then use this:
Approach (ii):
They are indeed redundant. You have to choose one place to set it: either the designer or the constructor.
Most probably you have a debug version of the app on your device and want to test updating the closed-testing app, which will not work because of the difference in signing keys.
Same problem. Did you resolve it?
I have had this exact same problem and I can't find a solution as of writing this.
$states.ErrorOutput doesn't return anything useful.
Until AWS implements a way to reference variables inside the Distributed Map state, we're out of luck.
I would try using a service to store or retrieve outside data tied to that specific state machine execution.
One workaround: if you are also using Docker, you can create the cluster in a Docker container. Run the following command in a terminal:
minikube start --driver=docker
I tried adding this as a comment but Stack Overflow did not let me. I don't think AWS allows us to do that. You might have to delete the connector, reset the offsets, and then re-create it.
Here's a post mentioning it is planned in future releases: https://repost.aws/questions/QUi4UW_uFpTdSeDYXR7MKr2w/is-there-a-way-to-update-connector-configuration-using-msk-connect-api
This seemed to work for me:
https://gist.github.com/JacobWeisenburger/f43f4635be7bfb8328823103bfaf3e5f
/**
CORS blocks this request from the browser, so you will need to run this code on the server.
*/
export async function checkIfYouTubeChannelIsLive ( channelId: string ) {
    const liveCheck = `https://www.youtube.com/embed/live_stream?channel=${ channelId }`
    const responseText = await fetch( liveCheck ).then( res => res.text() )
    const isLive = responseText.indexOf( 'This video is unavailable' ) < 0
    return isLive
}
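If you are doing this from Python on the server instead, the same check might look like the minimal sketch below (assuming the requests library and the same 'This video is unavailable' marker used above):

import requests

def check_if_youtube_channel_is_live(channel_id: str) -> bool:
    # CORS blocks this request from the browser, so run it server-side.
    url = f"https://www.youtube.com/embed/live_stream?channel={channel_id}"
    response_text = requests.get(url, timeout=10).text
    return "This video is unavailable" not in response_text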
Try this:
Content-Security-Policy: style-src 'self' https://fonts.googleapis.com 'sha256-KJCiag/ONB9TpGaUe4pEzZMHCxZPfqveZBD6JwsDks8=' 'unsafe-hashes'
If the error persists, feel free to contact me. 👍
I know this is a pretty old post but this is the only thing on the internet I was able to find showing this exact problem. Posting my solution in case anyone else comes across this looking for answers.
I had to help a user with this exact issue today. Developer tab enabled, Source pane opens but doesn't show the buttons at the bottom. After popping out the Source pane and making it smaller, I could see the buttons. Seems like a display issue.
To fix it, in Excel go to File > Options, open the General tab, and at the top under "User Interface options" toggle to "Optimize for compatibility"; then close and re-open Excel. The buttons should remain visible.
I was put off by the message
❗ Because you are using a Docker driver on windows, the terminal needs to be open to run it.
and I ended up here on Stack Overflow. But because I had gotten that message after running the command
minikube service <serviceName> -n <namespace> --url
I was already done. The port given was not the port I had specified, but the URL that was shown worked. All I needed to do was leave that terminal open and copy and paste the URL into my browser. The explanation in the minikube documentation helped me understand the situation better.
I recognize that this won't be particularly helpful to the savvy folks, but it would have helped me at the time so I submit it for any other complete noobs like me.
If I understand correctly, you would like (2 x 3) subplots, where the x-axes of each column are aligned.
You can adapt the same code to do so; sharex='col' will share the x-axis per individual column, or alternatively sharex=True will share the x-axis across all subplots:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
y1 = np.sin(x) # Data for the top subplot
y2 = np.cos(x) # Data for the bottom subplot
fig, ax = plt.subplots(2, 3, figsize=(8, 6), sharex='col', gridspec_kw={"hspace": 0.4})
ax = np.ravel(ax) # flatten
for ii in range(3):
    ax[ii].plot(x, y1, label="sin(x)", color="blue")
    ax[ii + 3].plot((ii + 1) * x, (ii + 1) * y2, label="cos(x)", color="orange")
plt.show()
You can create a SMART folder, put the folder on a cyclic schedule to rerun every 15 minutes after the folder's END, and put all 3 jobs inside the folder. It will run your jobs in sequence: when Job A completes, it passes its condition to Job B and then to Job C. Only after one full cycle of all 3 jobs will the folder rerun, and so on.
This is surprisingly hard to find a straight answer about.
Amazon itself does not advertise a maximum image size. They do, however, list the max size per layer on their service quotas page, which is 52,000 MiB (~50 GiB).
Other sources state 10 TiB or 25 TiB, but those are unconfirmed by AWS.
Your mileage may vary in the TiB range, but it's safe to say that with a layer size limit of 50 GiB, an 8 GiB image should be fine.
However, your issue may be with the layer sizes themselves. From personal experience, I've seen more failures when layer sizes exceed 1 GiB, so keeping layer sizes below 1 GiB might be something you want to try.
I would consider two options:
I can't make it work with \r; any other options?
Your Gaussian function seems to have a factor of 2 in it; try this instead:
def gaussian(wz, r, I):
    """
    wz: Beam width at z=0
    r : Radial coordinates
    I : Intensity distribution
    """
    Fin = I * np.exp(-(r / wz)**2)
    return Fin
Are you sure that 2 is supposed to be there?
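For comparison, here is a quick sketch that plots both profiles (the wz and I values are arbitrary, purely for illustration):

import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(-3, 3, 300)
wz, I = 1.0, 1.0  # arbitrary beam width and peak intensity

# Profile without the factor of 2 (as above) vs. the original with it
plt.plot(r, I * np.exp(-(r / wz) ** 2), label="exp(-(r/wz)^2)")
plt.plot(r, I * np.exp(-2 * (r / wz) ** 2), label="exp(-2(r/wz)^2)")
plt.xlabel("r")
plt.legend()
plt.show()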
Solved it.
from anytree import Node, RenderTree
from collections import Counter
import os
import openpyxl
from PIL import Image, ImageDraw, ImageFont
import re
# Create a directory to store the individual name card images
cards_dir = "C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/cards"
os.makedirs(cards_dir, exist_ok=True)
# Load the .xlsx file
file_path = 'C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/YJMB Trumpet Trees.xlsx'
workbook = openpyxl.load_workbook(file_path)
sheet = workbook.active
# Read the data starting from row 2 to the last row with data (max_row) in columns A to N
people_data = []
for row in sheet.iter_rows(min_row=2, max_row=sheet.max_row, min_col=1, max_col=14):
    person_info = [cell.value for cell in row]
    people_data.append(person_info)
# Tree Data Making
# Dictionary to hold people by their names
people_dict = {}
# List to hold the root nodes of multiple trees
root_nodes = []
# Sets to track parents and children
parents_set = set()
children_set = set()
# Dictionary to track parent-child relationships for conflict detection
parent_child_relationships = {}
# List to store the individual trees as objects
family_trees = [] # List to hold each separate family tree
# Variable to track the current tree number
tree_number = 0 # Start with tree 1
# A counter for nodes without children
end_id_counter = 1
years = []
x_max = 0
# Iterate over the people data and create nodes for each person
for i, person_info in enumerate(people_data, start=2):  # i starts at 2 for row index
    name = person_info[0]  # Name is in the first column (column A)
    rat_year = str(person_info[1])[:4]  # Year they joined the marching band (second column)
    if rat_year.isdigit():
        years.append(int(rat_year))
    instrument = person_info[2]
    parent_name = person_info[7]  # Column H for VET (8th column)
    children_names = person_info[8:14]  # Columns I to N for RATs (9th to 14th columns)
    # Determine if the node has children (if any of the children_names is non-empty)
    has_children = any(child_name for child_name in children_names if child_name)
    if i < len(people_data) and not person_info[7]:  # Parent is empty in that row
        tree_number += 1  # Increment tree number for the next family tree
    # Check if this name is already in the people_dict
    if name in people_dict:
        # If the person already exists in the dictionary, retrieve their node
        person_node = people_dict[name]
        # Update the rat_year for the existing person node if necessary
        person_node.rat_year = rat_year
        person_node.instrument = instrument
    else:
        # If the person does not exist in the dictionary, create a new node
        person_node = Node(name, tree_number=tree_number, id=0, has_children=has_children, rat_year=rat_year, x_coord=None, y_coord=None, instrument=instrument, children_nodes=[])  # Added children_nodes
    # If parent_name is empty, this is a root node for a new tree
    if parent_name:
        if parent_name in people_dict:
            parent_node = people_dict[parent_name]
        else:
            parent_node = Node(parent_name, tree_number=tree_number, id=0, has_children=False, rat_year=None, x_coord=None, y_coord=None, instrument=None, children_nodes=[])  # Added children_nodes
            people_dict[parent_name] = parent_node  # Add the parent to the dictionary
        person_node.parent = parent_node  # Set the parent for the current person
        parents_set.add(parent_name)
        # After setting the parent, update the parent's has_children flag
        parent_node.has_children = True  # Set has_children to True for the parent node
        parent_node.children_nodes.append(person_node)  # Add to parent's children_nodes
    else:
        root_nodes.append(person_node)  # Add to root_nodes list
    people_dict[name] = person_node  # Add the new person node to the dictionary
    # Now create child nodes for the given children names
    for child_name in children_names:
        if child_name:
            if child_name not in people_dict:
                child_node = Node(child_name, parent=person_node, tree_number=tree_number, id=0, has_children=False, rat_year=rat_year, x_coord=None, y_coord=None, instrument=instrument, children_nodes=[])  # Added children_nodes
                people_dict[child_name] = child_node
                children_set.add(child_name)
            # If the child node has been created, we need to ensure the parent's has_children flag is True
            person_node.has_children = True
            person_node.children_nodes.append(people_dict[child_name])  # Add child to parent's children_nodes
            if child_name not in parent_child_relationships:
                parent_child_relationships[child_name] = set()
            parent_child_relationships[child_name].add(name)
# After all nodes are created, we calculate x and y coordinates for each node
new_id = 1
start_x_coord = 200
curr_tree = 1
min_year = min(years) if years else 0
max_year = max(years) if years else 0
year_range = max_year - min_year + 1 if years else 0
end_id_counter = 1
# Print out the family trees for each root node (disconnected trees)
for root_node in root_nodes:
    family_tree = []
    for pre, fill, node in RenderTree(root_node):
        family_tree.append(f"{pre}{node.name}")
    family_trees.append(family_tree)
    # print(f"\nFamily Tree starting from {root_node.name}:")
    for pre, fill, node in RenderTree(root_node):
        node.id = new_id
        new_id += 1
        if not node.has_children:
            new_tree = node.tree_number
            if new_tree != curr_tree:
                start_x_coord += 200
                curr_tree = node.tree_number
            node.end_id = end_id_counter
            end_id_counter += 1
            node.x_coord = start_x_coord
            start_x_coord += 170
        else:
            node.end_id = 0
        if getattr(node, 'x_coord', 'N/A') and getattr(node, 'x_coord', 'N/A') > x_max:
            x_max = node.x_coord
        # Print details for each node
        # print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, X Coord: {getattr(node, 'x_coord', 'N/A')}, Y Coord: {getattr(node, 'y_coord', 'N/A')}, Rat Year: {getattr(node, 'rat_year', 'N/A')}, Instrument: {getattr(node, 'children_nodes', 'N/A')})")
# Now assign X coordinates to nodes where X is None (based on their children)
while any(node.x_coord is None for node in people_dict.values()):
    for node in people_dict.values():
        if node.has_children:
            children_with_coords = [child for child in node.children if child.x_coord is not None]
            if len(children_with_coords) == len(node.children):  # Check if all children have x_coord
                average_x_coord = sum(child.x_coord for child in children_with_coords) / len(children_with_coords)
                node.x_coord = round(average_x_coord)  # Set the parent's x_coord to the average
# Print out the family trees for each root node (disconnected trees)
for root_node in root_nodes:
    family_tree = []
    for pre, fill, node in RenderTree(root_node):
        family_tree.append(f"{pre}{node.name}")
    family_trees.append(family_tree)
    # print(f"\nFamily Tree starting from {root_node.name}:")
    # for pre, fill, node in RenderTree(root_node):
    #     print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, Children Nodes: {getattr(node, 'children_nodes', 'N/A')})")
# fix the rat_year attribute for even-numbered generations (done)
# use that to determine y value (done)
# determine x values from the bottom up recursively (done)
# # Print duplicate ids, if any
# if duplicates:
# print("\nDuplicate IDs found:", duplicates)
# else:
# print("\nNo duplicates found.")
#----------------------------------------------------------#
# Tree Chart Making
# Extract the years from the first four characters in Column B (done in lines 51-53 now)
# Calculate the range of years (from the minimum year to the maximum year) (107-109)
# Create a base image with a solid color (header space)
base_width = x_max + 200
base_height = 300 + (100 * year_range) # Header (300px) + layers of 100px strips based on the year range
base_color = "#B3A369"
base_image = Image.new("RGB", (base_width, base_height), color=base_color)
# Create a drawing context
draw = ImageDraw.Draw(base_image)
# Define the text and font for the header
text = "The YJMB Trumpet Section Family Tree"
font_path = "C:/Windows/Fonts/calibrib.ttf"
font_size = 240
font = ImageFont.truetype(font_path, font_size)
# Get the width and height of the header text using textbbox
bbox = draw.textbbox((0, 0), text, font=font)
text_width = bbox[2] - bbox[0]
text_height = bbox[3] - bbox[1]
# Calculate the position to center the header text horizontally
x = (base_width - text_width) // 2
y = (300 - text_height) // 2 # Vertically center the text in the first 300px
# Add the header text to the image
draw.text((x, y), text, font=font, fill=(255, 255, 255))
# List of colors for the alternating strips
colors = ["#FFFFFF", "#003057", "#FFFFFF", "#B3A369"]
strip_height = 100
# Font for the year text
year_font_size = 60
year_font = ImageFont.truetype(font_path, year_font_size)
# Add the alternating colored strips beneath the header
y_offset = 300 # Start just below the header text
for i in range(year_range):
    strip_color = colors[i % len(colors)]
    # Draw the strip
    draw.rectangle([0, y_offset, base_width, y_offset + strip_height], fill=strip_color)
    # Calculate the text to display (the year for this strip)
    year_text = str(min_year + i)
    # Get the width and height of the year text using textbbox
    bbox = draw.textbbox((0, 0), year_text, font=year_font)
    year_text_width = bbox[2] - bbox[0]
    year_text_height = bbox[3] - bbox[1]
    # Calculate the position to center the year text vertically on the strip
    year_text_x = 25  # Offset 25px from the left edge
    year_text_y = y_offset + (strip_height - year_text_height) // 2 - 5  # Vertically center the text
    # Determine the text color based on the strip color
    year_text_color = "#003057" if strip_color == "#FFFFFF" else "white"
    # Add the year text to the strip
    draw.text((year_text_x, year_text_y), year_text, font=year_font, fill=year_text_color)
    # Move the offset for the next strip
    y_offset += strip_height
# Font for the names on the name cards (reduced to size 22)
name_font_size = 22
name_font = ImageFont.truetype("C:/Windows/Fonts/arial.ttf", name_font_size)
# Initialize counters for each year (based on the range of years)
year_counters = {year: 0 for year in range(min_year, max_year + 1)}
# Create a list of names from the spreadsheet, split on newlines where appropriate
for node in people_dict.values():
    # Choose the correct name card template based on Column C
    if node.instrument and "Trumpet" not in node.instrument:
        name_card_template = Image.open("C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/blank_blue_name_card.png")
    else:
        name_card_template = Image.open("C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/blank_name_card.png")
    if node.rat_year:
        year_string = str(node.rat_year)[:4]
        if year_string.isdigit():
            year = int(year_string)
            year_index = year - min_year  # Find the corresponding year index (from 0 to year_range-1)
            name = node.name
            # Check if the name contains "VET" or "RAT"
            if "VET" in name or "RAT" in name:
                name_lines = name.split(' ', 1)
                name = name_lines[0] + '\n' + name_lines[1]
            elif name == "Xxx Xxxxxx-Xxxxxxx":
                name_lines = name.split('-')
                name = name_lines[0] + '\n' + name_lines[1]  # Add newline after the hyphen
            else:
                name_lines = name.split(' ')
                if len(name_lines) > 1:
                    name = ' '.join(name_lines[:-1]) + '\n' + name_lines[-1]
                else:
                    name_lines = [name]
            # Create a copy of the name card for each person
            name_card_copy = name_card_template.copy()
            card_draw = ImageDraw.Draw(name_card_copy)
            # Calculate the total height of all the lines combined (with some padding between lines)
            line_heights = []
            total_text_height = 0
            for line in name.split('\n'):
                line_bbox = card_draw.textbbox((0, 0), line, font=name_font)
                line_height = line_bbox[3] - line_bbox[1]
                line_heights.append(line_height)
                total_text_height += line_height
            # Shift the text up and calculate the vertical starting position
            start_y = (name_card_template.height - total_text_height) // 2 - 6
            # Draw each line centered horizontally
            current_y = start_y
            first_line_raised = False
            for i, line in enumerate(name.split('\n')):
                line_bbox = card_draw.textbbox((0, 0), line, font=name_font)
                line_width = line_bbox[2] - line_bbox[0]
                line_x = (name_card_template.width - line_width) // 2
                card_draw.text((line_x, current_y), line, font=name_font, fill="black")
                if i == 0 and any(char in line for char in 'gjpqy'):
                    current_y += line_heights[i] + 7
                    first_line_raised = True
                elif i == 0:
                    current_y += line_heights[i] + 7
                else:
                    if first_line_raised:
                        current_y += line_heights[i] - 2
                    else:
                        current_y += line_heights[i] + (5 if i == 0 else 0)
            # Position for the name card in the appropriate year strip
            card_y = 300 + (strip_height * year_index) + (strip_height - name_card_template.height) // 2  # Vertically center in the strip based on year
            node.y_coord = card_y
            # Assign card and y position attributes to each person
            person_node.card = name_card_copy
            person_node.y_coord = card_y
            # Use the counter for the corresponding year to determine x_offset
            year_counters[year] += 1
            card_file_path = os.path.join(cards_dir, f"{node.name}.png")
            person_node.card.save(card_file_path)
            # Paste the name card onto the image at the calculated position
            base_image.paste(name_card_copy, (node.x_coord, node.y_coord), name_card_copy)
# Create a list of names from the spreadsheet, split on newlines where appropriate
for node in people_dict.values():
    # Add black rectangle beneath the name card if the node has children
    if node.has_children:
        if len(node.children_nodes) == 1:
            child_node = getattr(node, 'children_nodes', 'N/A')[0]  # Only one child, so get the first (and only) child
            # print(getattr(child_node, 'y_coord', 'N/A'))
            # Coordinates for the rectangle (centered beneath the name card)
            rect_x = node.x_coord + (name_card_template.width - 6) // 2  # Center the rectangle
            rect_y = node.y_coord + (name_card_template.height - 2)  # Just below the name card
            rect_y_bottom = int(getattr(child_node, 'y_coord', 'N/A')) + 1  # Bottom of rectangle is aligned with the y_coord of the child
            # Draw the rectangle
            draw.rectangle([rect_x - 1, rect_y, rect_x + 6, rect_y_bottom], fill=(111, 111, 111))
        else:
            # Calculate the leftmost and rightmost x-coordinates of the child nodes
            min_x = min(getattr(child, 'x_coord', 0) for child in node.children_nodes)
            max_x = max(getattr(child, 'x_coord', 0) for child in node.children_nodes)
            # Calculate the center of the rectangle (between the leftmost and rightmost child nodes)
            rect_x = (min_x + max_x) // 2  # Center x-coordinate between the children
            rect_y = (node.y_coord + min(getattr(child, 'y_coord', node.y_coord) for child in node.children_nodes)) // 2
            rect_width = max_x - min_x
            draw.rectangle([rect_x - rect_width // 2 + 75, rect_y + 36, rect_x + rect_width // 2 + 75, rect_y + 6 + 37], fill=(111, 111, 111))
            parent_y_bottom = rect_y + 36
            # Coordinates for the rectangle (centered beneath the name card)
            rect_x = node.x_coord + (name_card_template.width - 6) // 2  # Center the rectangle
            rect_y = node.y_coord + (name_card_template.height - 2)  # Just below the name card
            draw.rectangle([rect_x - 1, rect_y, rect_x + 6, parent_y_bottom], fill=(111, 111, 111))
            # Now create a vertical rectangle for each child node
            for child in node.children_nodes:
                child_x = getattr(child, 'x_coord', 0)
                child_center_x = child_x + (name_card_template.width - 6) // 2  # x-center of the child
                child_y_bottom = parent_y_bottom  # The bottom of the rectangle should align with the parent's bottom
                # Draw the rectangle from the center of the child node up to the parent's y-bottom
                draw.rectangle([child_center_x - 1, child_y_bottom, child_center_x + 6, getattr(child, 'y_coord', 0) + 1], fill=(111, 111, 111))  # 6px wide
# Print out the family trees for each root node (disconnected trees)
for root_node in root_nodes:
    family_tree = []
    for pre, fill, node in RenderTree(root_node):
        family_tree.append(f"{pre}{node.name}")
    family_trees.append(family_tree)
    print(f"\nFamily Tree starting from {root_node.name}:")
    for pre, fill, node in RenderTree(root_node):
        # print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, Children Nodes: {getattr(node, 'children_nodes', 'N/A')})")
        print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, Y Coord: {getattr(node, 'y_coord', 'N/A')}, Children: {len(getattr(node, 'children_nodes', 'N/A'))})")
# Save the final image with name cards and black rectangles
base_image.save("YJMB_Trumpet_Section_Family_Trees_2024.png")
base_image.show()
The issue for me was not specifying the version of firebase-functions.
const functions = require('firebase-functions/v1'); // Specify /v1
Adding /v1 solved the issue.
There are always ways to rewrite the script to speed it up, but I think the question here (and for me as well) is something else: he had a script that previously ran in half the time on the same machine. So why would a script slow down without any changes?
How about:
sort -u file1 file2
As of the latest version, if you're using django-recaptcha, instead of importing
from captcha.fields import ReCaptchaField
you should import
from django_recaptcha.fields import ReCaptchaField
I believe you used Web App - Execute as - User accessing the web app. I tried your code, tested it, and got the same result as the sample picture given.
To make it work, change the Web App setting to Execute as - Me ([email protected]). Also select Anyone under Who has access.
Execute the app as me — In this case, the script always executes as you, the owner of the script, no matter who accesses the web app.
Execute the app as user accessing the web app — In this case, the script runs under the identity of the active user using the web app. This permission approach causes the web app to show the email of the script owner when the user authorizes access.
You can read more about Web App here - Web Apps
I spent more than an hour solving this problem, and for some reason I couldn't find anything about it on the web. Is it only me who had this problem?
Anyway, the solution was quite easy, but still strange in my opinion. I just should have gone to Project Settings -> Modules -> expanded the project's main folder, and found the warning there stating that the Kotlin library was not found in the module dependencies list:

It was the only place I saw any warnings. IntelliJ IDEA CE didn't say anything: there was no popup about this problem, and nothing in the other Project Settings menu items either. I mean, if you just open Project Settings you won't see this warning in, say, the Project menu item, or in Libraries (where I would expect such a warning), or in the other menu items, including "Modules". You have to expand Modules to find it.
I think JetBrains could do better, considering that Kotlin is their own language. Maybe the Ultimate edition notifies you about this problem immediately when you open the project, but with CE you may have to spend some time tracking it down if you haven't faced the problem before.
Go here: https://docs.snowflake.com/en/user-guide/vscode-ext This will redirect you to here: https://docs.snowflake.com/en/developer-guide/snowflake-cli/installation/installation#label-snowcli-install-windows-installer Which will finally redirect you to getting the right source here: https://sfc-repo.snowflakecomputing.com/snowflake-cli/index.html
Check the upgrade documentation to v 31-1: https://www.ag-grid.com/javascript-data-grid/upgrading-to-ag-grid-31-1/#deprecations
showColumnMenuAfterMouseClick - deprecated; use IHeaderParams.showColumnMenuAfterMouseClick within a header component, or api.showColumnMenu elsewhere.
Never too late! As I'm facing a similar issue: it seems this is sorted by specifying the additional accepted headers with
--header="Accept: font/woff2,font/woff,font/ttf,application/font-woff,application/font-woff2,application/x-font-ttf" \
https://www.gnu.org/software/wget/manual/html_node/HTTP-Options.html#index-header_002c-add
Try wrapping the memo field in a hashing utility like MD5HASH:
x = Createobject("MD5srv.aaMD5")
SELECT Field1, Field2MEMO, x.MD5String(Field2MEMO) as MemoHash FROM DBF() ORDER BY MemoHash
This will identify all memos that share the same hash value (distinct). At that point you can do a subsequent JOIN on only the unique hashes, or select into another table and create a unique index on MemoHash to return only unique instances.
I have fixed the issue. You can get the code from https://codesandbox.io/p/sandbox/stackoverflow-forked-yc4nkm?file=%2Fsrc%2FTheOverlayTrigger.tsx%3A54%2C63.
Change
state.styles.popper.transform = customTransform;
to
state.styles.popper.top = customTransform.top;
state.styles.popper.left = customTransform.left;
Then it will work. 👍
For me, the solution was like this:
// hooks.server.ts
import type { Handle } from '@sveltejs/kit'
import { building } from '$app/environment'

export const handle: Handle = async (p) => {
    if (building) {
        // apply different logic here
    } else {
        const clientIp = p.event.getClientAddress()
    }
    return p.resolve(p.event)
}
more details: https://svelte.dev/docs/kit/$app-environment#building
From what I understand, getting approval for Knox is an arduous and expensive process. For my Wear OS app that needed kiosk mode, I used a 3rd party app to get there: https://www.42gears.com/products/kiosk-software/android-smartwatch-kiosk-mode/
I thought it was just me! No matter what I tried (httpx, aiohttp, etc.), it either wouldn't receive the payload or the payload was cut off. The non-async sseclient works. Did you ever figure this out for async?
There is a tutorial here which may be helpful: https://web.sdk.qcloud.com/trtc/webrtc/doc/en/tutorial-33-advanced-electron-screen-share.html
I spent some time on this. For me it worked to change the language of the page (the button in the top right corner); after changing the language, the other screen showed the problematic images with a red border!
In my case, I forgot to export the function.
Instead of this:
const handler = async () => {
Should be this:
export const handler = async () => {
Docker containers differ fundamentally from traditional Linux systems in process management. While Linux systems have a complete init system (PID 1) managing all processes, Docker containers run with a single main process that must handle all process management duties.
Direct background process (note that the exec form shown here does not interpret & as a shell would; it is passed to the daemon as a literal argument, so backgrounding like this generally requires the shell form and is rarely what you want):
CMD ["./daemon", "&"]
Init System:
CMD ["tini", "--", "./daemon"]
Process Supervisor:
CMD ["supervisor", "./daemon"]
In short:
Simple daemons → lightweight init system
Complex daemons → process supervisor
Critical services → full monitoring and health checks
Control Panel -> Region -> Administrative -> Change system locale, enable "Beta: Use Unicode UTF-8 for worldwide language support", then restart.
I know it is an old thread; however, I found a simple solution for Greek characters: https://prnt.sc/UWMo7rcnbj65
For overflow it is easy: do it with value & 0xFF. But what is best for underflow?
E.g., for 8-bit: 0x100 + value (if the value goes from 0x00 to -0x01 after a decrement).
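A minimal Python sketch showing that the same mask also covers underflow, because a bitwise AND on a negative integer yields its two's-complement low byte, and that the 0x100 + value trick agrees:

def wrap8(value: int) -> int:
    # Keep only the low 8 bits; this wraps both overflow and underflow.
    return value & 0xFF

assert wrap8(0x100) == 0x00       # overflow: 256 wraps to 0
assert wrap8(-1) == 0xFF          # underflow: -1 wraps to 255
assert wrap8(0x100 + -1) == 0xFF  # the 0x100 + value trick gives the same result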
Building on the answer from @mr3k, I was able to use the AzureFunctionApp@2 task to deploy my Flex Consumption function app, but initially got the "Failed to deploy web package to App Service. Not Found (CODE: 404)" error @Crwydryn had mentioned. To resolve this I needed to make two changes:
See https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/IaC/bicep/main.bicep for an example of how to setup the storage account blob container for the deployment
I think you should add a key prop to <EditorButton pageContent={updates} icon={} label="Save" link="/wiki/test" /> for the update.
If that doesn't work, contact me to investigate further; I would need to check the EditorButton component code.
Run this:
php bin/magento dev:query-log:enable
Then your queries should be in var/debug/db.log
I had trouble with the answer proposed by @Batatinha, hence I tried Git Bash. The commands below worked without any issues.
CYPRESS_INSTALL_BINARY=0 npm install cypress
DEBUG=cypress:cli* CYPRESS_INSTALL_BINARY=~/Downloads/cypress.zip npx cypress install
Branch analysis is supported only by the commercial Developer Edition of SonarQube and above.
In my case the model name is resolving to a Windows path, as in the error below:
OSError: [Errno 22] Invalid argument: 'C:\PromptFlow\github\promptflow\azureml:honty-prod-innovation:1'
Snippet from the deployment YAML file:
model:
  path: azureml:honty-prod-innovation:1
Any help highly appreciated.
This happened because of extreme logits in my model: an imbalanced dataset and small pos_weight values made the logits explode (e.g., 1e20), and this caused the loss to become NaN. First, I stabilized the gradients:
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
for batch in dataloader:
    optimizer.zero_grad()  # clear gradients from the previous step
    with autocast():
        logits = model(input_ids, attention_mask)
        loss = criterion(logits, labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
Then I added a bit of label smoothing to reduce overconfident predictions:
def smooth_labels(labels, smoothing=0.1):
    return labels * (1 - smoothing) + 0.5 * smoothing

smoothed_labels = smooth_labels(labels)
loss = criterion(logits, smoothed_labels)
Then, to avoid exploding gradients, I added L2 regularization:
reg_lambda = 0.01
l2_reg = sum(torch.norm(p) for p in model.parameters())
loss += reg_lambda * l2_reg
Finally, I normalized the logits with BatchNorm after nn.Linear:
self.classifier = nn.Sequential(
    nn.Linear(self.bert.config.hidden_size, num_labels),
    nn.BatchNorm1d(num_labels)
)
Problem solved; everything seems fine now. Thanks.
Updated command for Angular 18 & 19. It builds for production (--configuration production), avoids deleting the output folder (--delete-output-path=false), and enables localization (--localize):
ng build --configuration production --delete-output-path=false --localize
I think what you're looking for is in the Types module: https://chapel-lang.org/docs/modules/standard/Types.html#Types.max
Note that the Types module is provided by default, so no use or import statement should be necessary to access it.
Wiktor Stribiżew's answer in the comments solved the issue.
Do you have any recorded results stored with the manager object? From the docs:
Use this method to asynchronously query for tremor results recorded by the monitorKinesias(forDuration:) method. The movement disorder manager keeps tremor results for only seven days after the time of recording.
It sounds like you're authorized and have the correct entitlements, but I wonder if there's just no data recorded yet and you need to call monitorKinesias(forDuration:) before attempting to get the results.
While OPTION(RECOMPILE) would probably solve your issue, I would suggest reading this article: https://www.sqlinthewild.co.za/index.php/2009/09/15/multiple-execution-paths/ which explains why if/else blocks in a stored procedure can mess with your query execution, and how you can fix it.
I have solved the problem by adding the "/bin" folder to the MANIFEST file in the "Runtime - Classpath" tab
On Mac, the default shortcut is Option + middle mouse button, not left mouse button as it is on Windows.
If your CORS configuration is correct, check the permissions on the storage/logs/laravel.log file. The following command solved the issue for me:
chmod 777 storage/logs/laravel.log
You can refer to my answer here at Opensearch forum: https://forum.opensearch.org/t/how-to-create-react-production-build-of-opensearch-dashboard/20606
I am also posting the Dockerfile here to give you a clear idea of how to containerize the production build of OpenSearch Dashboards.
### LAYER 1 : Base Image
FROM node:18.19.0
### LAYER 2
# Create a new user and group
RUN groupadd -r opensearch-dashboards && useradd -r -g opensearch-dashboards osd-user
### LAYER 3
# Set the working directory
WORKDIR /home/osd-user/workdir
### LAYER 4
# Copy application code into the container
COPY . .
### LAYER 5
# Create yarnrc file and grant ownership to non root user
RUN touch /home/osd-user/.yarnrc && \
chown -R osd-user:opensearch-dashboards /home/osd-user
# Switch to non root user
USER osd-user
### LAYER 6
# Bootstrap OpenSearch Dashboards
RUN yarn config set strict-ssl false && \
export NODE_TLS_REJECT_UNAUTHORIZED=0 && \
export NO_PROXY=localhost && \
yarn osd bootstrap
### LAYER 7
# Build OSD artifact
RUN export NODE_TLS_REJECT_UNAUTHORIZED=1 && yarn build-platform --linux
# Expose application port
EXPOSE 5601
### LAYER 8
# Build xyz plugin, install xyz plugin
RUN cd plugins/xyz && \
yes "2.13.0" | yarn build && \
cd ../.. && \
cd build/opensearch-dashboards-2.13.0-SNAPSHOT-linux-x64 && \
./bin/opensearch-dashboards-plugin remove xyz && \
./bin/opensearch-dashboards-plugin install file:///home/osd-user/workdir/plugins/xyz/build/xyz-2.13.0.zip
# Start Server
WORKDIR /home/osd-user/workdir/build/opensearch-dashboards-2.13.0-SNAPSHOT-linux-x64/bin
CMD ["./opensearch-dashboards"]
Here is a link with the details:
https://docs.expo.dev/router/reference/troubleshooting/#missing-back-button
Basically, put this const into the _layout file:
export const unstable_settings = {
    initialRouteName: "index",
};
Since I had multiple versions of pip and Python, I had to run the uninstall twice:
sudo pip3 uninstall pip
and
sudo pip uninstall pip
Thanks to jwenzel and other posts, I found a solution which I want to publish here so that others do not have to read through all the information.
Put either one of the solutions into Program.cs right under
var builder = WebApplication.CreateBuilder(args);
Solution 1 (source: How to call .UseStaticWebAssets() on WebApplicationBuilder?):
if (builder.Environment.IsEnvironment("DevelopmentPK"))
{
    builder.WebHost.UseWebRoot("wwwroot").UseStaticWebAssets();
}
Solution 2 (source: Unable to call StaticWebAssetsLoader.UseStaticWebAssets):
if (builder.Environment.IsEnvironment("DevelopmentPK"))
{
    StaticWebAssetsLoader.UseStaticWebAssets(builder.Environment, builder.Configuration);
}
Replace DevelopmentPK with the name of your Environment, defined in launchSettings.json.
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "DevelopmentPK"
}
Thanks Brian, works perfectly!
Here are a couple of ways to resolve this issue:
The actual issue was with the certificate file: I was using the certificate that I had been using to connect from DBeaver (the .crt file), but the certificate file needed for Python was different.
You are not executing a shell script in your Dockerfile; you set the shell script as the CMD for the container. It is recommended to use absolute paths in ENTRYPOINT (if declared) and CMD (if declared), as that ensures the files can be accessed.
You can also move imports and configs into the docker-compose file, or alternatively try multi-stage builds to separate operations. The RUNs are definitely going to slow you down quite a bit.
Would you mind sharing the exact error you're getting?
More recently, you can just initialize the logger before you use it: Logger log = LoggerFactory.getLogger(<Class_Name>.class);
For what it's worth, having the same issue, I ended up doing the same for the end of the file, which was also causing some crackle.
So it became ulaw_audio_data[800:-800]
There are essentially two versions of IAM database authentication for Cloud SQL.
Manual IAM database authentication (official docs):
For this version you log in to the database with the IAM principal (the service account, in your case) as the database username, and pass an OAuth2 access token belonging to the IAM principal as the password.
Note: MySQL and Postgres both format the IAM database username differently. MySQL formats the database username as follows:
For an IAM user account, this is the user's email address without the @ symbol or domain name. For example, for [email protected], enter test-user. For a service account, this is the service account's email address without the @project-id.iam.gserviceaccount.com suffix.
When using either version you need to make sure your <App Engine default service account> is formatted accordingly.
Automatic IAM database authentication (official docs):
For this version it requires the use of the Cloud SQL Proxy or a Cloud SQL Language Connector Library (Go, Node, Python, Java). These libraries will essentially manage fetching and continuously refreshing the OAuth2 token in the background and embed it as the password for you.
So as the end user you do not need to pass a password, the libraries or Proxy handle it for you.
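For languages that do have a connector, the flow looks roughly like this minimal Python sketch (the instance connection name, database, and IAM user below are placeholders; enable_iam_auth=True is what turns on automatic IAM authentication in the Cloud SQL Python Connector):

from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    # The connector fetches and refreshes the OAuth2 token for you,
    # so no password is passed here.
    return connector.connect(
        "my-project:my-region:my-instance",  # placeholder instance connection name
        "pymysql",                           # MySQL driver, matching the question
        user="my-service-account",           # IAM principal, suffix stripped as described above
        db="my-db",
        enable_iam_auth=True,
    )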
.NET AppEngine Recommendation:
My recommendation for a .NET App Engine app would be to use manual IAM database authentication, since unfortunately there is no Language Connector for .NET, and the Proxy can be complex to run alongside your app.
There is a really good blog on Cloud SQL Postgres + IAM database authentication where you can essentially create your own version of automatic IAM authentication through the use of a dynamic password with UsePeriodicPasswordProvider, I wonder if the MySqlConnectionStringBuilder has similar functionality?
Hello and welcome to Stack Overflow!
I don't have time right now to put together a quick project to test whether it works, but have you tried using this method somewhere near the root widget of your app?
A solution without JS would be to use the aria-invalid attribute selector, like so:
input[aria-invalid="true"] {
    border-color: #f00;
}
In Tailwind v4:
<input class="aria-invalid:border-red-500" />
This is the log of a web browser sending invalid HTTP/2 frames to the server.
Tomcat should really log the IP so you can fail2ban them.
I also have the same problem.
Is there any solution for this?
In all fairness, I would use the comparison:
(Math.Abs(left - right) < double.Epsilon)
I modified a common existing source code package, with the half-wit goal of releasing it. I have it working with 2 included libraries, base64 and AES encryption. To make it work, I just included the .c source code at the top of main.c and used the existing Makefile. Creating .a archive files of these libraries is also a super easy task, and plugging them into gcc is a cakewalk.

Trying to figure out how to add these very same .a libraries using the Automake format, however, is utter insanity. Getting them to auto-compile might as well be a plot to send a bottle rocket to the moon. There just seems to be no way to do it and no path to success. Take this: AC_SEARCH_LIBS(function, search-libs, [action-if-found], [action-if-not-found], [other-libraries]). Am I supposed to define every function in the library using this? That can't possibly be right. Without any working examples or some roadmap, the simple act of adding simple goofball libraries to this goofy hodgepodge of crazy might land me in the loony bin.

Nothing works. "Just sling it in the configure.ac," he says. Yeah, right. Everything Google AI responded with is total crap that just pukes more insanity to the screen and leads nowhere. If AI can't even understand it, what hope do I have? The manuals for Automake and Autoconf read like stereo installation instructions for a deaf man written in Sanskrit. It will take me months to crack this unless I stumble upon some Rosetta stone.

The most frustrating thing about all of this is that it should just be easy. If I crack this, I will document it so the next poor schmuck won't have to lose his mind over it.
Based on the REST API documentation, 'item' isn't a valid endpoint: https://system.netsuite.com/help/helpcenter/en_US/APIs/REST_API_Browser/record/v1/2024.2/index.html.
If you were to update an assemblyItem your call would be a patch to: https://[accountid].suitetalk.api.netsuite.com/services/rest/record/v1/assemblyItem/{id}
I found a provisional fix. I know it's not the correct way to do it, but I'm learning.
In the main /account page, check if the user is authenticated:
import AccountDetails from "@/components/account/AccountDetails";
import AccountSkeleton from "@components/skeletons/AccountSkeleton";
import { fetchUserData } from "@/actions/user";
import { isAuthenticated } from "@/actions/auth";
import { redirect } from "next/navigation";
export default async function Account() {
    const authCheck = await isAuthenticated();

    if (!authCheck) {
        redirect("/login");
    } else {
        const userData = await fetchUserData();

        if (!userData) {
            return <AccountSkeleton />;
        }

        return <AccountDetails userData={userData} />;
    }
}
In this case, please run the application in debug mode with a breakpoint on the line that has SpringApplication.run. When it gets to the breakpoint, evaluate what the application is doing; the evaluator will tell you where it fails. Most probably there is a bean that is failing to create, and it will show you which bean failed. Resolve that issue and the application should start up as expected; if it still fails, continue the same process until all the beans are created successfully.
Hope this helps, please check us out: https://www.youtube.com/@cypcode
This is caused by the browser's appearance setting. If the browser's appearance is dark, this will occur. Use data-theme="light" in the html tag:
<html lang="" data-theme="light">
...
</html>
The only way I have found to know whether it is an iPhone or an iPad, and to branch logic depending on the device, is this:
UIDevice.current.userInterfaceIdiom == .pad
Any other approach can give missing information on the iPhone.
One thing you might need to consider is that they both perform similar functions. Both NGINX and Keepalived provide failover, but at different layers.
While NGINX handles application-level failover and load balancing, Keepalived manages network-level failover with a Virtual IP (VIP).
In a setup where both are used, they might overlap, but Keepalived is more focused on the availability of the IP address, while NGINX ensures smooth traffic routing at the application layer. If you're already using NGINX effectively for fault tolerance, Keepalived might be redundant unless you specifically need the network-level failover.
Together, I believe they provide both network and application-level fault tolerance.
plt.fignum_exists(plt.gcf().number)
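A quick usage sketch:

import matplotlib.pyplot as plt

fig = plt.figure()
print(plt.fignum_exists(fig.number))  # True while the figure is open

plt.close(fig)
print(plt.fignum_exists(fig.number))  # False once it has been closed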
I'm not sure if I installed my JetBrains font differently, but I have to use 'JetBrainsMono Nerd Font Mono' in my terminal configuration to get it to work properly. Otherwise, it just gave the "you must use a monospace font" error and defaulted to the ugly system default.
Hope this helps anyone else having the same problem.
I would check the following:
That crons are not disabled in wp-config.php:
define('DISABLE_WP_CRON', true);
Check the error logs to see if any fatal errors are showing, normally in your PHP logs, in error.log, or in the ./wp-content/uploads/wc-logs/ folder.
I finally managed to make it work.
The URL /CONTEXT-PATH/api/v3/api-docs works well; I mean, the URLs in this JSON file are correct.
I copied the Swagger app into the webapp folder and customized swagger-initializer to set the server URL.
Upgrade the version of the spring-boot-starter-parent; it worked for me. Go to start.spring.io and you will see the latest Spring Boot version, as in: 4.0.0.
I suppose the HTML file is something like the one below:
<!DOCTYPE html>
<html>
<head>
<title>Test</title>
</head>
<body>
<code id="user-code">SRQX-FJXQ</code>
</body>
</html>
And you want to get SRQX-FJXQ. Here is the Robot code:
*** Test Cases ***
Get Code
    Open Browser    ${path_to_your_html}    chrome
    ${code}=    Get Text    xpath=//*[@id="user-code"]
    Log To Console    User code value: ${code}
    Close Browser
Here is the result in the console: User code value: SRQX-FJXQ
Did you find the solution? I have the same exact error
Did you manage to solve it? I'm having the same problem.
No, we do not publish the IP addresses of webhooks and have been encouraging developers to verify the payload signature instead: https://aps.autodesk.com/blog/webhooks-backend-system-upgrade-and-ip-addresses-change
Have a look at this utility: https://github.com/petrbroz/svf-utils
This part shows how you can use it to download SVF content: https://github.com/petrbroz/svf-utils/blob/develop/samples/download-svf.js
If you need another physics parameter as well, you can do this:
physics: const AlwaysScrollableScrollPhysics().applyTo(const ClampingScrollPhysics()),
Thank you for the question. I think the blog post was misleading for newer versions, and it has been edited to provide the correct information. Currently, you can change the database type this way:
/usr/local/antmedia/start.sh -m standalone -h mongodb://[username]:[password]@[url]
For more information, you can also visit the documentation
I just solved the problem. I mistakenly set critic_loss to be
critic_loss: Tensor = torch.mean(
    F.mse_loss(
        self.critic(cur_observations),
        advantages.detach(),  # notice this line
    )
)
but it should be
critic_loss: Tensor = torch.mean(
    F.mse_loss(
        self.critic(cur_observations),
        td_target.detach(),  # notice this line
    )
)
After correcting the loss expression, the agent converged to the safer path after 2000 episodes.

==== strategy ====
> > v
^ > v
^ x ^
I used the NuGet manager and installed the latest version of Newtonsoft.Json. This fixed the issue for me.
Setting table descriptions at table-creation time is not directly supported by the apache_beam.io.gcp.bigquery.WriteToBigQuery transform: there is no parameter for specifying a description (the schema parameter only lets you specify the table schema). Setting a table description requires the following steps (see the sketch after this list):
Construct the table independently: use the BigQuery API or the bq command-line tool to create the BigQuery table, including its description, prior to executing your Beam pipeline. This guarantees that the table exists before the Beam pipeline tries to write data. For more details refer to this documentation.
Use WriteToBigQuery with CREATE_NEVER: in your Beam pipeline, pass beam.io.BigQueryDisposition.CREATE_NEVER as the create_disposition argument. As a result, Beam will just write data to the existing table rather than trying to create the table itself; refer to link1 and link2.
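A minimal sketch of both steps (the project, dataset, table, and schema below are placeholders):

import apache_beam as beam
from google.cloud import bigquery

# Step 1: create the table up front, with a description.
client = bigquery.Client()
table = bigquery.Table(
    "my-project.my_dataset.my_table",  # placeholder table reference
    schema=[bigquery.SchemaField("name", "STRING")],
)
table.description = "Created ahead of the pipeline so the description can be set"
client.create_table(table, exists_ok=True)

# Step 2: in the pipeline, only write to the (now existing) table.
with beam.Pipeline() as p:
    (
        p
        | beam.Create([{"name": "example"}])
        | beam.io.WriteToBigQuery(
            "my-project:my_dataset.my_table",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )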
Since it apparently wasn't obvious enough: being in headless mode triggers their bot detection and therefore blocks the client.
How exactly this is done, and how it could be bypassed, would require insight into their website code, which they are unlikely to share. As usual, there is an arms race between people who want to automate and people who don't want bots on their site; but in terms of puppeteer's headless:false, this battle is lost, since it's too easy to detect.
I did a little experiment to confirm that the password wasn't being set, but apparently it is actually being set. I don't know why I'm getting that warning message, though.
The experiment:
const { Client } = pg;
const client = new Client({
    user: 'root',
    password: 'root',
    database: 'qr_orders_db',
});
await client.connect();
await client.connect();
Apparently this doesn't throw errors when .env.local is loaded before .env in the docker-compose file. Mysteries of life, I guess ¯\_(ツ)_/¯.
I won't mark my own answer as the accepted one for now because I want to see if someone knows how to get rid of that warning.
I (possibly) found the reason. I had a component and an index.ts file like this:
libs/components/src/lib/my-component
    my-component.component.ts
    my-component.component.html
    my-component.component.scss
    index.ts
The index.ts file had only one line it, an export:
export { MyComponent } from './my-component.component';
In my tsconfig.json there is a path defined like this:
"@components/*": [
"libs/components/src/lib/*/index.ts"
],
The component was then imported like this:
import { MyComponent } from '@components/my-component';
Removing the index.ts file and just importing the component directly by its actual path solved it.
However, I cannot really say why or if it was just a coincidence.
No idea if you still need this 5 years later, but maybe others who run into this problem will see it. In your LogOut method (or similar), you just need to do this: Task.Run(() => HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme)).Wait();
[a-zA-Z](\.)[a-zA-Z]
will match a dot surrounded by uppercase or lowercase letters. The backslash is needed as an escape, since the dot is part of regex syntax. How to replace the dot with an underscore depends on the programming language you want to perform this operation in.
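For example, in Python this could be done with re.sub, capturing the surrounding letters so only the dot is replaced (the sample text is made up):

import re

text = "foo.bar baz.qux 3.14"
result = re.sub(r"([a-zA-Z])\.([a-zA-Z])", r"\1_\2", text)
print(result)  # foo_bar baz_qux 3.14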
It seems your network packet filter (npf) is filtering out the messages by default. Try restarting npf by running
net stop npf
and then
net start npf