How do I add a column to indicate whether the mismatch appears in df1 or df2?
Instead of @PostConstruct, listen to the ApplicationReadyEvent.
This is built into PHPUnit now: https://github.com/sebastianbergmann/phpunit/pull/6118
please help me,
Thank you so much. This fixed my problem!
Are there any instructions on how to do it? For example, how to extract an application from the card and reinstall it with a different AID value?
Have you found a solution yet? I've tried cleaning node modules and pods and reinstalling, but nothing works.
This link explains how to install Odoo + PostgreSQL on Alpine Linux.
I need help with this error: ExternalError: TypeError: Cannot read properties of undefined (reading 'tp$mro') on line 5 in dacoolthing.py
This is the code:
from turtle import *

class char(Turtle):
    def __init__(self):
        super().__init__()
        self.penup()
        self.shape("turtle")
        self.goto(0, 0)
        self.speed(0)

    def attack():
        print()
I am having the same problem, was there any resolution?
I tried manually modifying the requirements.txt file to add libglib2.0, libnss3, libgconf, and libfontconfig1 as shown on this other thread, but it didn't seem to have any effect.
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 127
Also tried connecting with SSH to pip install selenium directly in hopes the chrome driver dependencies would get updated.
Any luck on this? I’m having the same issue.
Did you ever find the solution to this?
I'm in exactly the same spot right now. I had to change the port to 8082 as well, though my issue lies with Grafana. The metrics are not being output to Prometheus for some reason :(
There is a recipe hosted in the meta-python-ai layer https://layers.openembedded.org/layerindex/recipe/403973/
Can you share your code so we can replicate the issue on our end? Without it, it's impossible to understand what is causing it. Or is your website hosted online? If yes, please share the URL.
Thanks, your method helped me a lot to implement the bubble sort algorithm on a linked-list data structure.
Great question and great answer! https://stackoverflow.com/users/12109788/jpsmith
I have added more variables in c() and I can't get it to sum the medians etc for each variable. Instead I get a list per hour from the first value. How can I fix this?
library(dplyr)
library(tidyr)   # for drop_na()
library(chron)
library(gtsummary)

chrontest <- chestdf %>%
  select(tts_sec, ttl_sec, ttprov1_sec, deltatrop_sec, vistelse_sec) %>%
  drop_na() %>%
  mutate(across(ends_with("_sec"), ~ format(as.POSIXct(.), "%H:%M:%S"))) %>%
  mutate(across(ends_with("_sec"), ~ chron::times(.)))
summary_table <- chrontest %>%
  tbl_summary(
    include = c("tts_sec", "ttl_sec", "ttprov1_sec", "deltatrop_sec", "vistelse_sec"),
    label = list(
      tts_sec ~ "Tid till S",
      ttl_sec ~ "Tid till L",
      ttprov1_sec ~ "Tid till provtagn 1",
      deltatrop_sec ~ "Tid till provtagn 2",
      vistelse_sec ~ "Vistelsetid"
    ),
    type = list(
      all_continuous() ~ "continuous2"
    ),
    statistic = list(
      all_continuous() ~ c(
        "{mean}",
        "{median} ({p25}, {p75})",
        "{min}, {max}"
      )
    ),
    digits = list(
      all_continuous() ~ 2
    )
  )
I'll be the first to admit that it may not work for everyone or in every use case, but it works for what I intended.
Since it's been a while since posting the question, naturally a good bit has changed in my implementation of Sanity, but you shouldn't have any issues adapting it to your own project with minor changes.
I'd like to start by addressing the changes I've made since posting the question. Please keep in mind all changes listed here were created with Next.js 15 and—more specifically—the next/image component in mind. You may need to make modifications if this does not apply to you.
I no longer use the imageUrlFor, compressWidthAndHeight, or prepareImage functions to generate the src attribute and other image props. Instead I take advantage of the GROQ query step by pulling in the information I need and creating the src at this level. I created a helper function for querying images with GROQ, since there are many different scenarios that require different functions on the src.
If you're using TypeScript like I do, here are the definitions you'll need:
export type SanityCrop = {
  top: number
  left: number
  bottom: number
  right: number
}

export type SanityHotspot = {
  x: number
  y: number
  width: number
  height: number
}

export type SanityImage = {
  _id: string
  alt?: string
  aspectRatio?: number
  blurDataURL: string
  crop?: SanityCrop
  height?: number
  hotspot?: SanityHotspot
  filename?: string
  src: string
  width?: number
}
All descriptions in the GroqImageSourceOptions type are copied from Sanity – Image transformations – Image URLs. You're welcome to use this in your own projects if you want.
type GroqImageSourceOptions = Partial<{
  /** Automatically returns an image in the most optimized format supported by the browser as determined by its Accept header. To achieve the same result in a non-browser context, use the `fm` parameter instead to specify the desired format. */
  auto: 'format'
  /** Hexadecimal code (RGB, ARGB, RRGGBB, AARRGGBB) */
  bg: string
  /** `0`-`2000` */
  blur: number
  /** Use with `fit: 'crop'` to specify how cropping is performed.
   *
   * `focalpoint` will crop around the focal point specified using the `fp` parameter.
   *
   * `entropy` attempts to preserve the "most important" part of the image by selecting the crop that preserves the most complex part of the image.
   * */
  crop:
    | 'top'
    | 'bottom'
    | 'left'
    | 'right'
    | 'top,left'
    | 'top,right'
    | 'bottom,left'
    | 'bottom,right'
    | 'center'
    | 'focalpoint'
    | 'entropy'
  /** Configures the headers so that opening this link causes the browser to download the image rather than showing it. The browser will suggest to use the file name provided here. */
  dl: string
  /** Specifies device pixel ratio scaling factor. From `1` to `3`. */
  dpr: 1 | 2 | 3
  /** Affects how the image is handled when you specify target dimensions.
   *
   * `clip` resizes to fit within the bounds you specified without cropping or distorting the image.
   *
   * `crop` crops the image to fill the size you specified when you specify both `w` and `h`.
   *
   * `fill` operates the same as `clip`, but any free area not covered by your image is filled with the color specified in the `bg` parameter.
   *
   * `fillmax` places the image within the box you specify, never scaling the image up. If there is excess room in the image, it is filled with the color specified in the `bg` parameter.
   *
   * `max` fits the image within the box you specify, but never scaling the image up.
   *
   * `min` resizes and crops the image to match the aspect ratio of the requested width and height. Will not exceed the original width and height of the image.
   *
   * `scale` scales the image to fit the constraining dimensions exactly. The resulting image will fill the dimensions, and will not maintain the aspect ratio of the input image.
   */
  fit: 'clip' | 'crop' | 'fill' | 'fillmax' | 'max' | 'min' | 'scale'
  /** Flip image horizontally, vertically or both. */
  flip: 'h' | 'v' | 'hv'
  /** Convert image to jpg, pjpg, png, or webp. */
  fm: 'jpg' | 'pjpg' | 'png' | 'webp'
  /** Specify a center point to focus on when cropping the image. Values from 0.0 to 1.0 in fractions of the image dimensions. */
  fp: {
    x: number
    y: number
  }
  /** The frame of an animated image. The only valid value is 1, which is the first frame. */
  frame: 1
  /** Height of the image in pixels. Scales the image to be that tall. */
  h: number
  /** Invert the colors of the image. */
  invert: boolean
  /** Maximum height. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
  maxH: number
  /** Maximum width in the context of image cropping. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
  maxW: number
  /** Minimum height. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
  minH: number
  /** Minimum width. Specifies size limits giving the backend some freedom in picking a size according to the source image aspect ratio. This parameter only works when also specifying `fit: 'crop'`. */
  minW: number
  /** Rotate the image in 90 degree increments. */
  or: 0 | 90 | 180 | 270
  /** The number of pixels to pad the image. Applies to both width and height. */
  pad: number
  /** Quality `0`-`100`. Specify the compression quality (where applicable). Defaults are `75` for JPG and WebP. */
  q: number
  /** Crop the image according to the provided coordinate values. */
  rect: {
    left: number
    top: number
    width: number
    height: number
  }
  /** Currently the asset pipeline only supports `sat: -100`, which renders the image with grayscale colors. Support for more levels of saturation is planned for later. */
  sat: -100
  /** Sharpen `0`-`100` */
  sharp: number
  /** Width of the image in pixels. Scales the image to be that wide. */
  w: number
}>
function applySourceOptions(src: string, options: GroqImageSourceOptions) {
  const convertedOptions = Object.entries(options)
    .map(
      ([key, value]) =>
        `${breakCamelCase(key).join('-').toLowerCase()}=${typeof value === 'string' || typeof value === 'boolean' ? value : typeof value === 'number' ? Math.round(value) : Object.values(value).join(',')}`,
    )
    .join('&')
  return src + ` + "?${convertedOptions}"`
}
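For illustration, here's roughly what this produces, assuming breakCamelCase is a small helper that splits a key on capital letters (so maxW becomes max-w):

// Hypothetical call, showing the GROQ string concatenation this builds.
const fragment = applySourceOptions('"src": asset->url', { auto: 'format', maxW: 1200, q: 80 })
// fragment === '"src": asset->url + "?auto=format&max-w=1200&q=80"'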
type GroqImageProps = Partial<{
  alt: boolean
  /** Returns the aspect ratio of the image */
  aspectRatio: boolean
  /** Precedes asset->url */
  assetPath: string
  blurDataURL: boolean
  /** Returns the coordinates of the crop */
  crop: boolean
  /** Returns the height of the image */
  height: boolean
  /** Returns the hotspot of the image */
  hotspot: boolean
  filename: boolean
  otherProps: string[]
  src: GroqImageSourceOptions
  /** Number of spaces used to indent the generated projection (referenced by groqImage below) */
  tabStart: number
  /** Returns the width of the image */
  width: boolean
}>
/**
* # GROQ Image
*
* **Generates the necessary information for extracting the image asset, with built-in and typed options, making it easier to use GROQ's API as it relates to image fetching.**
*
* - Include `alt` and `blurDataURL` whenever possible.
*
* - It's best to always specify the `src` options as well.
*
* - Include either `srcset` or `sources` for best results.
*
* - `srcset` generates URLs for the `srcset` attribute of an `<img>` element.
*
* - `sources` generates URLs for `<source>` elements, used in the `<picture>` element.
*/
export function groqImage(props?: GroqImageProps) {
  const prefix = props?.tabStart ? `\n${' '.repeat(props.tabStart)}` : '\n ',
    assetPath = props?.assetPath ? `${props.assetPath}.` : ''
  let constructor = `{`

  if (props?.otherProps) constructor = constructor + prefix + props.otherProps.join(`,${prefix}`) + `,`
  if (props?.alt) constructor = constructor + prefix + `"alt": ${assetPath}asset->altText,`
  if (props?.crop) {
    let crop = 'crop,'
    if (props.assetPath) crop = `"crop": ${assetPath}crop,`
    constructor = constructor + prefix + crop
  }
  if (props?.hotspot) {
    let hotspot = 'hotspot,'
    if (props.assetPath) hotspot = `"hotspot": ${assetPath}hotspot,`
    constructor = constructor + prefix + hotspot
  }
  if (props?.width) constructor = constructor + prefix + `"width": ${assetPath}asset->metadata.dimensions.width,`
  if (props?.height) constructor = constructor + prefix + `"height": ${assetPath}asset->metadata.dimensions.height,`
  if (props?.aspectRatio)
    constructor = constructor + prefix + `"aspectRatio": ${assetPath}asset->metadata.dimensions.aspectRatio,`
  if (props?.blurDataURL) constructor = constructor + prefix + `"blurDataURL": ${assetPath}asset->metadata.lqip,`
  if (props?.filename) constructor = constructor + prefix + `"filename": ${assetPath}asset->originalFilename,`
  constructor = constructor + prefix + `"src": ${assetPath}asset->url`
  if (props?.src && Object.entries(props.src).length >= 1) constructor = applySourceOptions(constructor, props.src)

  return constructor
}
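Here's a minimal usage sketch; the post document type and heroImage field are hypothetical, and since groqImage as written doesn't append a closing brace to the projection it builds, the query closes it explicitly:

const imageProjection = groqImage({
  alt: true,
  blurDataURL: true,
  width: true,
  height: true,
  aspectRatio: true,
  hotspot: true,
  crop: true,
  src: { auto: 'format', q: 80 },
})

// Hypothetical query; adjust the document type and field names to your own schema.
const query = `*[_type == "post"][0]{
  title,
  "heroImage": heroImage ${imageProjection} }
}`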
Although most props are now prepared with groqImage—like the alt and blurDataURL for next/image—the crop, hotspot, width, and height still aren't utilized. To utilize them I created a couple of helper functions that are implemented in the main getImagePropsForSizingFromSanity function.
applyCropToImageSource calculates the rect search parameter of the Sanity image URL to apply the crop based on the image's dimensions.
applyHotspotToImageSource uses the x and y values of the hotspot for the fx and fy focal points defined in the search parameters. It also makes sure the crop search parameter is set to focalpoint.
getImagePropsForSizingFromSanity applies both previously mentioned functions to the src and calculates the maximum width and height attributes based on the actual dimensions of the image in Sanity, compared to the developer-defined max dimensions. If no max width and height are provided, the width and height props remain undefined. This is intentional, so that the fill prop can be properly utilized.
export function applyCropToImageSource(src: string, crop?: SanityCrop, width?: number, height?: number) {
  if (!crop || !width || !height) return src
  // Sanity crop values are fractions (0-1) of each edge, so convert them to pixels for the rect parameter.
  const { top, left, bottom, right } = crop
  const croppedWidth = width - (left + right) * width,
    croppedHeight = height - (top + bottom) * height
  const rect = `&rect=${Math.round(left * width)},${Math.round(top * height)},${Math.round(croppedWidth)},${Math.round(croppedHeight)}`
  return src + rect
}
export function applyHotspotToImageSource(src: string, hotspotCoords?: Pick<SanityHotspot, 'x' | 'y'>) {
  if (!hotspotCoords) return src
  const { x, y } = hotspotCoords
  const fx = `&fx=${x}`,
    fy = `&fy=${y}`
  if (src.includes('&crop=') && !src.includes('&crop=focalpoint')) {
    src = src.replace(
      /&crop=(top|bottom|left|right|top,left|top,right|bottom,left|bottom,right|center|entropy)/,
      '&crop=focalpoint',
    )
  } else {
    src = src + `&crop=focalpoint`
  }
  if (!Number.isNaN(x) && x <= 1 && x >= 0) src = src + fx
  if (!Number.isNaN(y) && y <= 1 && y >= 0) src = src + fy
  return src
}
/**
* # Get Image Props for Sizing from Sanity
*
* - Returns src, height, and width for `next/image` component
* - Both sanity and max heights and widths must be included to include height and width props
* - The src will have focalpoints and cropping applied to it, according to the provided crop, hotspot, and dimensions.
*/
export function getImagePropsForSizingFromSanity(
  src: string,
  {
    crop,
    height,
    hotspot,
    width,
  }: Partial<{
    crop: SanityCrop
    height: Partial<{ sanity: number; max: number }>
    hotspot: SanityHotspot
    width: Partial<{ sanity: number; max: number }>
  }>,
): Pick<ImageProps, 'src' | 'height' | 'width'> {
  return {
    src: applyHotspotToImageSource(applyCropToImageSource(src, crop, width?.sanity, height?.sanity), hotspot),
    height: height?.max ? Math.min(height.sanity || Infinity, height.max) : undefined,
    width: width?.max ? Math.min(width.sanity || Infinity, width.max) : undefined,
  }
}
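For context, a small usage sketch (the image object and the max dimensions are illustrative, assuming it was fetched with the groqImage helper above):

const imageProps = getImagePropsForSizingFromSanity(image.src, {
  crop: image.crop,
  hotspot: image.hotspot,
  width: { sanity: image.width, max: 1200 },
  height: { sanity: image.height, max: 800 },
})
// imageProps.src now has rect/fx/fy applied; width and height are capped at 1200x800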
And lastly, it should be noted that next.config.ts is modified to implement a custom loader to take advantage of Sanity's built-in image pipeline.
// next.config.ts
import type { NextConfig } from 'next'
const nextConfig: NextConfig = {
  images: {
    formats: ['image/webp'],
    loader: 'custom',
    loaderFile: './utils/sanity-image-loader.ts',
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'cdn.sanity.io',
        pathname: '/images/[project_id]/[dataset]/**',
        port: '',
      },
    ],
  },
}

export default nextConfig
// sanity-image-loader.ts
// * Image
import { ImageLoaderProps } from 'next/image'
export default function imageLoader({ src, width, quality }: ImageLoaderProps) {
  if (src.includes('cdn.sanity.io')) {
    const url = new URL(src)
    const maxW = Number(url.searchParams.get('max-w'))
    url.searchParams.set('w', `${!maxW || width < maxW ? width : maxW}`)
    if (quality) url.searchParams.set('q', `${quality}`)
    return url.toString()
  }
  return src
}
Now that we've got the boring stuff out of the way, let's talk about how the implementation of the hotspot actually works.
The hotspot object is defined like this (in TypeScript):
type SanityHotspot = {
  x: number
  y: number
  width: number
  height: number
}
All of these values are numbers 0-1, which means multiplying each value by 100 and adding a % at the end will generally be how we implement the values. x and y are the center of the hotspot. width and height are fractions of the dimensions of the image.
Now there are certainly different ways of using these values to get the results you're looking for (e.g. top, left, and/or translate), but I wanted to use the object-position CSS property, since it doesn't require wrapping the <img> element in a <div> and it works well with object-fit: cover;.
The most important thing for dynamically positioning the image to keep the hotspot in view is handling resize events. Since I'm using Next.js, I created a React hook to handle this.
I made this hook return the dimensions of either the specified element or the window, so it can be used for anything. In our use case, the dimensions of the image are all we care about.
'use client'
import { RefObject, useEffect, useState } from 'react'
export function useResize(el?: RefObject<HTMLElement | null> | HTMLElement) {
  const [dimensions, setDimensions] = useState({ width: 0, height: 0 })

  const handleResize = () => {
    const trackedElement = el ? ('current' in el ? el.current : el) : null
    setDimensions({
      width: trackedElement ? trackedElement.clientWidth : window.innerWidth,
      height: trackedElement ? trackedElement.clientHeight : window.innerHeight,
    })
  }

  useEffect(() => {
    if (typeof window !== 'undefined') {
      handleResize()
      window.addEventListener('resize', handleResize)
    }
    return () => {
      window.removeEventListener('resize', handleResize)
    }
  }, [])

  return dimensions
}
Now that we have our useResize hook, we can use it and apply the object-position to dynamically position the image to keep the hotspot in view. Naturally, we'll want to create a new component, so it can be used easily when we need it.
This image component is built off of the next/image component, since we still want to take advantage of all that that component has to offer.
'use client'

// * Types
import { SanityHotspot } from '@/typings/sanity'
export type ImgProps = ImageProps & { hotspotPositioning?: { aspectRatio?: number; hotspot?: SanityHotspot } }

// * React
import { RefObject, useEffect, useRef, useState } from 'react'

// * Hooks
import { useResize } from '@/hooks/use-resize'

// * Components
import Image, { ImageProps } from 'next/image'

export default function Img({ hotspotPositioning, style, ...props }: ImgProps) {
  const imageRef = useRef<HTMLImageElement>(null),
    { objectPosition } = useHotspot({ ...hotspotPositioning, imageRef })

  return <Image {...props} ref={imageRef} style={{ ...style, objectPosition }} />
}
Thankfully that part was really simple. I'm sure you noticed we still need to implement this useHotspot hook that returns the objectPosition property. First I just wanted to address the changes we made to the ImageProps from next/image.
We added a single property to make it as easy as possible to use. The hotspotPositioning prop optionally accepts both the aspectRatio and the hotspot. Both of these are easily pulled in using the groqImage function.
{ hotspotPositioning?: {
    aspectRatio?: number
    hotspot?: SanityHotspot
} }
Pitfall
It is possible that the aspectRatio will not be available if you aren't using the Media plugin for Sanity. If you do not provide both of these, the hotspot will not be dynamically applied.
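Putting it together, here's a minimal usage sketch, assuming imageProps came from getImagePropsForSizingFromSanity and image from the groqImage query above (the field names are illustrative):

<Img
  {...imageProps}
  alt={image.alt ?? ''}
  placeholder="blur"
  blurDataURL={image.blurDataURL}
  hotspotPositioning={{ aspectRatio: image.aspectRatio, hotspot: image.hotspot }}
/>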
Okay—the tough part. How exactly does the useHotspot hook calculate the coordinates of the objectPosition property?
By using a useEffect hook, we are able to update the objectPosition useState each time the width and/or height of the <img> element changes. Before actually running any calculations, we always check whether the hotspot and aspectRatio are provided, so—although you shouldn't use this component if you know you don't need to dynamically position the hotspot—it shouldn't hurt performance if you don't have either of those.
The containerAspectRatio is the aspect ratio of the part of the image that is actually visible. By comparing this to the aspectRatio, which is the full image, we can know which sides of the image are being cropped by the container.
By default we use the x and y coordinates of the hotspot for the objectPosition, in case the hotspot isn't being cut off at all.
Regardless of whether the image is being cropped vertically or horizontally, the calculation is basically the same. First, it calculates the aspect ratio of the visible area and uses the result to determine where the visible bounds sit on each side, as a fraction (0-1). Next, it calculates how far—if at all—the hotspot bounds overflow those visible bounds. By comparing each side's overflow to its hotspot counterpart, we can determine which direction the objectPosition needs to move.
It's important to note that objectPosition does not move the image the same way using top, left, or translate does. Where positive values move the image down and/or right and negative values move the image up and/or left, objectPosition moves the image within its containing dimensions. This means—assuming we start at 50% 50%—making the value lower moves the image right or down respectively, and making the value higher moves the image left or up respectively. This is the inverse of the other positioning properties, and objectPosition doesn't use negative values (at least not for how we want to use it). This is why the calculations are {x or y} ± ({total overflow amount} - {hotspot overflow amount}).
Lastly, we have the situation where both sides are overflowing. In this case we want to balance how much each side is overflowing to find a middle ground. This is simply 2 * {x or y} - 0.5.
Once the calculations are made, we convert the numbers to a percentage, clamped with min/max so the position never leaves the 0-1 range.
function useHotspot({
  aspectRatio,
  hotspot,
  imageRef,
}: {
  aspectRatio?: number
  hotspot?: SanityHotspot
  imageRef?: RefObject<HTMLImageElement | null>
}) {
  const [objectPosition, setObjectPosition] = useState('50% 50%'),
    { width, height } = useResize(imageRef)

  useEffect(() => {
    if (hotspot && aspectRatio) {
      const containerAspectRatio = width / height
      const { height: hotspotHeight, width: hotspotWidth, x, y } = hotspot
      let positionX = x,
        positionY = y

      if (containerAspectRatio > aspectRatio) {
        // Container is wider than the image (proportionally)
        // Image will be fully visible horizontally, but cropped vertically
        // Calculate visible height ratio (what portion of the image height is visible)
        const visibleHeightRatio = aspectRatio / containerAspectRatio
        // Calculate the visible vertical bounds (in normalized coordinates 0-1)
        const visibleTop = 0.5 - visibleHeightRatio / 2,
          visibleBottom = 0.5 + visibleHeightRatio / 2
        const hotspotTop = y - hotspotHeight / 2,
          hotspotBottom = y + hotspotHeight / 2
        // Hotspot extends above the visible area, shift it down
        if (hotspotTop < visibleTop) positionY = y - (visibleTop - hotspotTop)
        // Hotspot extends below the visible area, shift it up
        if (hotspotBottom > visibleBottom) positionY = y + (hotspotBottom - visibleBottom)
        // Hotspot extends above and below the visible area, center it vertically
        if (hotspotTop < visibleTop && hotspotBottom > visibleBottom) positionY = 2 * y - 0.5
      } else {
        // Container is taller than the image (proportionally)
        // Image will be fully visible vertically, but cropped horizontally
        // Calculate visible width ratio (what portion of the image width is visible)
        const visibleWidthRatio = containerAspectRatio / aspectRatio
        // Calculate the visible horizontal bounds (in normalized coordinates 0-1)
        const visibleLeft = 0.5 - visibleWidthRatio / 2,
          visibleRight = 0.5 + visibleWidthRatio / 2
        const hotspotLeft = x - hotspotWidth / 2,
          hotspotRight = x + hotspotWidth / 2
        // Hotspot extends to the left of the visible area, shift it right
        if (hotspotLeft < visibleLeft) positionX = x - (visibleLeft - hotspotLeft)
        // Hotspot extends to the right of the visible area, shift it left
        if (hotspotRight > visibleRight) positionX = x + (hotspotRight - visibleRight)
        // Hotspot extends beyond the visible area on both sides, center it
        if (hotspotLeft < visibleLeft && hotspotRight > visibleRight) positionX = 2 * x - 0.5
      }

      positionX = Math.max(0, Math.min(1, positionX))
      positionY = Math.max(0, Math.min(1, positionY))
      setObjectPosition(`${positionX * 100}% ${positionY * 100}%`)
    }
  }, [aspectRatio, hotspot, width, height])

  return { objectPosition }
}
I hope this is helpful for people, as I have been trying to find a solid way to implement this for far too long. If this was helpful to you or you have any recommendations to make it better, please let me know!
Please can you help me here? I am also developing an application using DWR and Spring 6 with Java 17, but I'm getting an exception: engine.js isn't loading.
I'm also getting an exception that the remote method is undefined, and my Java methods aren't getting called.
https://robbelroot.de/blog/csharp-bluetooth-example-searching-listing-devices/ , follow this link. This will help you
Kudos to @Pythoner! You saved my day. I was sure I had tried everything with API keys lol
from gtts import gTTS
from pydub import AudioSegment
from pydub.playback import play
# Lyrics to convert to audio (summarized and adapted to a narrated vocal-guide style)
lyrics = """
Una le di confianza, me enamoró y en su juego caí.
La segunda vino con lo mismo, me mintió, yo también le mentí.
Por eso es que ya no creo en el amor.
Gracias a todas esas heridas fue que yo aprendí...
Una conmigo jugó, y ahora con to’a yo juego.
En mi corazón no hay amor, no creo en sentimientos.
Soy un cabrón, se las pego a to’as.
Me tiro a esta, me tiro a la otra.
Mala mía, mai, es que me enzorra.
No quiero que más nadie me hable de amor, ya me cansé.
To’ esos trucos ya me los sé, esos dolores los pasé.
Quisiera que te sientas como yo me siento.
Quisiera cambiarle el final a este cuento.
Una conmigo quiso jugar, pues yo jugué con tres.
Una atrevida me quiso enchular, yo enchulé a las tres.
Y ahora no vuelvo a caer, me quedo con las putas y el poder.
Hoy te odio en secreto, si pudiera te devuelvo los besos.
Me arrepiento mil veces de haber confiado en ti.
Los chocolates y las flores, ahora son dolores.
Y después de la lluvia no hay colores.
Una conmigo jugó y ahora con todas yo juego.
En mi corazón no hay amor, tengo el alma en fuego.
Y no me hables de sentimientos, porque eso en mí ya está muerto.
"""
# Convert text to speech
tts = gTTS(lyrics, lang='es', slow=False)
audio_path = "/mnt/data/0_Sentimientos_GuiaVoz.mp3"
tts.save(audio_path)
audio_path
Thank you for the interesting information
I also changed my package.json to what was shown in the terminal, then ran npx expo i --fix in the terminal, and everything worked :)
Kudos!
30 characters, why?
Another option for a maintained package for this use-case: https://packagist.org/packages/wikimedia/minify
Thanks for this discussion. I am trying to do the same for my application, but I have to do this for several images sequentially, so I tried the same in a for loop, e.g.:
# Imports added for completeness; clicker is assumed to come from the mpl_point_clicker
# package, and on_pick is the pick handler defined as in the discussion above.
import numpy as np
import matplotlib.pyplot as plt
from mpl_point_clicker import clicker

for i_ in range(2):
    fig, ax = plt.subplots()
    # ax.add_artist(ab)
    for row in range(1, 30):
        tolerance = 30  # points
        ax.plot(np.arange(0, 15, 0.5), [i * row / i for i in range(1, 15 * 2 + 1)], 'ro-', picker=tolerance, zorder=0)
    fig.canvas.callbacks.connect('pick_event', on_pick)
    klicker = clicker(ax, ["event"], markers=["x"], **{"linestyle": "--"})
    plt.draw()
    plt.savefig(f'add_picture_matplotlib_figure_{i_}.png', bbox_inches='tight')
    plt.show()
But I get the click functionality only for the last image. How can I get it to work for all the images?
What is the JS in the first comment before the HTML?
Agree with @Nguyen above- I had this error across Mac and PC, simply restarting the kernel in Jupyter fixed it in both cases.
There is a thread for this bug in Apple Developer Forums: https://developer.apple.com/forums/thread/778471
grep -E '[a-zA-Z]*[[:space:]]foo' <thefilename> | grep -v '?'
+1
I have the same issue (on an arm64 arch) and did not find a solution.
It happens with different IDEs (VS Code, Cursor, GoLand), so I assume the issue is with go & dlv.
I also tried installing Go with Homebrew, from the Go website, and with gvm. None solved the issue.
Damn it, right after I posted it I realized I'm using :, not =. Problem is solved.
For anyone that is searching for this with no luck. Here is the documentation from MS: Share-Types
I am also facing the same issue, and even when I try to install an older version of Swagger, I still face the same problem.
PS C:\Users\LENOVO\OneDrive\Desktop\practical-round> npm i @nestjs/[email protected]
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: @nestjs/[email protected]
npm ERR! node_modules/@nestjs/common
npm ERR! @nestjs/common@"^10.0.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer @nestjs/common@"^9.0.0" from @nestjs/[email protected]
npm ERR! node_modules/@nestjs/swagger
npm ERR! @nestjs/swagger@"6.3.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR!
npm ERR! For a full report see:
npm ERR! C:\Users\LENOVO\AppData\Local\npm-cache_logs\2025-04-08T14_34_57_230Z-eresolve-report.txt
npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\LENOVO\AppData\Local\npm-cache_logs\2025-04-08T14_34_57_230Z-debug-0.log
sql problem, see the solution here
I have the same problem; I couldn't solve it for about two months, but now I've found a solution.
Unfortunately this is the error I get when trying to run the same command. How are you able to build it? What version of the llvm project are you building?
((1,"c"), (23, "a"), (32,"b"))
Same problem here (other tables), but the "kids table" isn't filtered as expected.
I have tried several attempts, and when I run sudo plank, it works without any issues. However, when I run plank normally (without sudo), the problem occurs. Could anyone suggest what kind of permissions or adjustments are needed to make it work without running as root?
Thanks in advance for your help!
I know that each format has its own compression, and I know that decompression is long and complicated.
But I would like to do the same thing using libraries that allow conversion to a single common format similar to .ppm.
Any suggestions?
PS: trying .ppm, it stores RGB values as unsigned.
this info is also available via their api without web scraping
Just published an article few days ago: https://stripearmy.medium.com/i-fixed-a-decade-long-ios-safari-problem-0d85f76caec0
And the npm package: https://www.npmjs.com/package/react-ios-scroll-lock
Hope this fixes your problem.
Can someone help me with a script that gives the first-level, second-level, and third-level approval details configured in the access policy?
I am experiencing the same problem!
Can't make it work. I've tried other options, but they never include the qty, just one product. Yours just comes back with an error and I can't see the qty field. Any suggestions?
It is also not working for me!
Maybe a mistake in the hook?
I faced a similar problem earlier. Try to see the solution in this question: How to stretch the DropdownMenu width to the full width of the screen?
@Raja Talha Did you find the solution to this?
It works just fine and gave me my exact location. Good job!
Can someone please guide me on how to convert a PyTorch .ckpt model to a Hugging Face-supported format so that I can use it with pre-trained models?
The model I'm trying to convert was trained using PyTorch Lightning, and you can find it here:
🔗 hydroxai/pii_model_longtransfomer_version
I need to use this model with the following GitHub repository for testing:
🔗 HydroXai/pii-masker
I tried using Hugging Face Spaces to convert the model to .safetensors format. However, the resulting model produces poor results and triggers several warnings.
These are the warnings I'm seeing:
Some weights of the model checkpoint at /content/pii-masker/pii-masker/output_model/deberta3base_1024 were not used when initializing DebertaV2ForTokenClassification: ['deberta.head.lstm.bias_hh_l0', 'deberta.head.lstm.bias_hh_l0_reverse', 'deberta.head.lstm.bias_ih_l0', 'deberta.head.lstm.bias_ih_l0_reverse', 'deberta.head.lstm.weight_hh_l0', 'deberta.head.lstm.weight_hh_l0_reverse', 'deberta.head.lstm.weight_ih_l0', 'deberta.head.lstm.weight_ih_l0_reverse', 'deberta.output.bias', 'deberta.output.weight', 'deberta.transformers_model.embeddings.LayerNorm.bias', 'deberta.transformers_model.embeddings.LayerNorm.weight', 'deberta.transformers_model.embeddings.token_type_embeddings.weight', 'deberta.transformers_model.embeddings.word_embeddings.weight', 'deberta.transformers_model.encoder.layer.0.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.0.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.0.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.0.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.0.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.0.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.0.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.0.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.0.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.0.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.0.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.0.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.0.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.0.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.0.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.0.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.0.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.0.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.0.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.0.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.0.output.dense.bias', 'deberta.transformers_model.encoder.layer.0.output.dense.weight', 'deberta.transformers_model.encoder.layer.1.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.1.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.1.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.1.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.1.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.1.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.1.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.1.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.1.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.1.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.1.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.1.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.1.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.1.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.1.attention.self.value_global.bias', 
'deberta.transformers_model.encoder.layer.1.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.1.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.1.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.1.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.1.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.1.output.dense.bias', 'deberta.transformers_model.encoder.layer.1.output.dense.weight', 'deberta.transformers_model.encoder.layer.10.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.10.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.10.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.10.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.10.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.10.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.10.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.10.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.10.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.10.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.10.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.10.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.10.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.10.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.10.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.10.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.10.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.10.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.10.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.10.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.10.output.dense.bias', 'deberta.transformers_model.encoder.layer.10.output.dense.weight', 'deberta.transformers_model.encoder.layer.11.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.11.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.11.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.11.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.11.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.11.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.11.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.11.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.11.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.11.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.11.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.11.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.11.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.11.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.11.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.11.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.11.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.11.intermediate.dense.weight', 
'deberta.transformers_model.encoder.layer.11.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.11.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.11.output.dense.bias', 'deberta.transformers_model.encoder.layer.11.output.dense.weight', 'deberta.transformers_model.encoder.layer.2.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.2.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.2.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.2.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.2.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.2.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.2.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.2.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.2.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.2.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.2.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.2.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.2.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.2.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.2.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.2.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.2.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.2.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.2.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.2.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.2.output.dense.bias', 'deberta.transformers_model.encoder.layer.2.output.dense.weight', 'deberta.transformers_model.encoder.layer.3.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.3.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.3.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.3.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.3.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.3.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.3.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.3.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.3.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.3.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.3.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.3.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.3.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.3.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.3.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.3.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.3.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.3.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.3.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.3.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.3.output.dense.bias', 'deberta.transformers_model.encoder.layer.3.output.dense.weight', 
'deberta.transformers_model.encoder.layer.4.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.4.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.4.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.4.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.4.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.4.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.4.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.4.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.4.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.4.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.4.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.4.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.4.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.4.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.4.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.4.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.4.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.4.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.4.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.4.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.4.output.dense.bias', 'deberta.transformers_model.encoder.layer.4.output.dense.weight', 'deberta.transformers_model.encoder.layer.5.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.5.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.5.attention.output.dense.bias', 'deberta.transformers_model.encoder.layer.5.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.5.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.5.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.5.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.5.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.5.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.5.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.5.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.5.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.5.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.5.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.5.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.5.attention.self.value_global.weight', 'deberta.transformers_model.encoder.layer.5.intermediate.dense.bias', 'deberta.transformers_model.encoder.layer.5.intermediate.dense.weight', 'deberta.transformers_model.encoder.layer.5.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.5.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.5.output.dense.bias', 'deberta.transformers_model.encoder.layer.5.output.dense.weight', 'deberta.transformers_model.encoder.layer.6.attention.output.LayerNorm.bias', 'deberta.transformers_model.encoder.layer.6.attention.output.LayerNorm.weight', 'deberta.transformers_model.encoder.layer.6.attention.output.dense.bias', 
'deberta.transformers_model.encoder.layer.6.attention.output.dense.weight', 'deberta.transformers_model.encoder.layer.6.attention.self.key.bias', 'deberta.transformers_model.encoder.layer.6.attention.self.key.weight', 'deberta.transformers_model.encoder.layer.6.attention.self.key_global.bias', 'deberta.transformers_model.encoder.layer.6.attention.self.key_global.weight', 'deberta.transformers_model.encoder.layer.6.attention.self.query.bias', 'deberta.transformers_model.encoder.layer.6.attention.self.query.weight', 'deberta.transformers_model.encoder.layer.6.attention.self.query_global.bias', 'deberta.transformers_model.encoder.layer.6.attention.self.query_global.weight', 'deberta.transformers_model.encoder.layer.6.attention.self.value.bias', 'deberta.transformers_model.encoder.layer.6.attention.self.value.weight', 'deberta.transformers_model.encoder.layer.6.attention.self.value_global.bias', 'deberta.transformers_model.encoder.layer.6.attention.self.............'deberta.encoder.layer.9.output.dense.bias', 'deberta.encoder.layer.9.output.dense.weight', 'deberta.encoder.rel_embeddings.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
If you struggle to resolve the problem with Python libs, check this article; it helped me a lot: https://aws.plainenglish.io/easiest-way-to-create-lambda-layers-with-the-required-python-version-d205f59d51f6
How can I make a no-reply mail? Is this enough for me?
data['h:Reply-To']=""
Add the folder in which you stored the "my-project-env" to the VSCode workspace.
I have the same problem. I couldn't install the solution. I think the problem may not be in the code. If you find the solution, I would be very happy if you share it.
I'm not sure what it is, but it's not working for me either. The link Eric gave redirects to support. I created all the relevant IDs (IBM Cloud, IBM, etc.) but nothing is working.
Have you found a solution to this problem?
Use an application like pgAdmin or DBeaver.
My answer is only to refute the top-voted answer, because I can't vote or reply. I followed its instructions and added "source ~/.bash_profile" at the beginning of my ~/.zshrc. Then I executed "source ~/.zshrc" and it gave an error: "-bash: export: `': not a valid identifier". At that point, any "sudo" and "vim" command stopped working at all, and my "~/.bashrc" content was replaced by 'eval "$(thefuck --alias)"'. My command config went missing! I could only delete the line "source ~/.bash_profile" and execute "echo $PATH" to check my PATH. I found that "/usr/bin" and "/bin" were missing, which made my basic commands completely invalid. Then I executed "export PATH=$PATH:/usr/bin:/bin" to fix it. Don't try that method lightly!
Did you resolve this issue? I have been working on it for days but have no resolution yet...
Here is my result
04-08 16:44:30 I/TestInvocation: Starting invocation for 'cts' with '[ DeviceBuildInfo{bid=eng.anqizh, serial=a0f32ff5} on device 'a0f32ff5']
04-08 16:44:31 E/TestInvocation: Caught exception while running invocation
04-08 16:44:31 E/TestInvocation: Trying to access android partner remote server over internet but failed: Unsupported or unrecognized SSL message
com.android.tradefed.targetprep.TargetSetupError[ANDROID_PARTNER_SERVER_ERROR|500505|DEPENDENCY_ISSUE]: Trying to access android partner remote server over internet but failed: Unsupported or unrecognized SSL message
at com.android.compatibility.common.tradefed.targetprep.DynamicConfigPusher.resolveUrl(DynamicConfigPusher.java:318)
at com.android.compatibility.common.tradefed.targetprep.DynamicConfigPusher.setUp(DynamicConfigPusher.java:172)
at com.android.tradefed.invoker.InvocationExecution.runPreparationOnDevice(InvocationExecution.java:621)
at com.android.tradefed.invoker.InvocationExecution.runPreparersSetup(InvocationExecution.java:522)
at com.android.tradefed.invoker.InvocationExecution.doSetup(InvocationExecution.java:375)
at com.android.tradefed.invoker.TestInvocation.prepareAndRun(TestInvocation.java:624)
at com.android.tradefed.invoker.TestInvocation.performInvocation(TestInvocation.java:291)
at com.android.tradefed.invoker.TestInvocation.invoke(TestInvocation.java:1431)
at com.android.tradefed.command.CommandScheduler$InvocationThread.run(CommandScheduler.java:692)
Caused by: javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
at java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:462)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:175)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1506)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1421)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:455)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:426)
at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:586)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1675)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1599)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:223)
at java.base/java.net.URL.openStream(URL.java:1325)
at com.android.compatibility.common.tradefed.targetprep.DynamicConfigPusher.resolveUrl(DynamicConfigPusher.java:315)
... 8 more
04-08 16:44:31 E/ClearcutClient: Unsupported or unrecognized SSL message
javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
at java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:462)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:175)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1506)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1421)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:455)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:426)
at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:586)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1446)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1417)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:219)
at com.android.tradefed.clearcut.ClearcutClient.sendToClearcut(ClearcutClient.java:344)
at com.android.tradefed.clearcut.ClearcutClient.lambda$flushEvents$1(ClearcutClient.java:322)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1760)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
04-08 16:44:31 W/NativeDevice: Attempting to stop logcat when not capturing for a0f32ff5
Same question, any more findings so far? Thanks in advance.
Sorry, I can't post a comment because I don't have 25 reputation, so I'm posting an answer.
But shouldn't your result be SemOrd = 10 for UserId = 1 and SubId = 706?
Kindly provide the code as a reference so the problem can be better understood.
Guys, if we import math and define two functions, do we have to import it twice?
Since scripts will be sunset, I recommend starting your development directly with RedApp.
You can check the link below — feel free to reach out if you need any help.
https://developer.sabre.com/sdk/sabre-red-360/25.1/help-documentation/home.html
To help us diagnose the problem, we need a bit more information:
- More Detailed Scenario - Please describe exactly what type of search you're running, or which part of the Workbench you're using, when the issue occurs. The search function appears in multiple places, so a precise description will help.
- Browser Console - Are there any errors or warnings in the Firefox console when the delay happens?
- Setup Details - You've mentioned that you're using GraphDB version 10.8.4 on a self-hosted server. Can you confirm if you're using the free version or a licensed one?
- Data Characteristics - It would also be useful to know the approximate volume and nature of the data in your database.
With these details, we can better investigate your problem.
Best regards,
Stilyana
14 years later, it seems Outlook still doesn't recognize it. Is it limited to Apple iCalendar?
I know it's an 11-year-old topic, but I just switched to JS and WebStorm. I'm wondering if anyone knows whether I can set the project explorer to automatically expand the src directory once I expand a module.
I have the same problem. Did you manage to resolve it?
Could you please provide more details about the specific modifications you made? I am encountering the same issue and would appreciate your guidance.
I am having the same error:
[Error: Failed to collect configuration for /_not-found]
Later, I found out that my .env file was missing a variable. Adding that environment variable solved this build error.
Also, try deleting the ".next" folder if you are self-hosting your project.
How did you find the solution to this error? Like, based on the screenshot provided, how was the error identified and solved by looking at the package natively? Can you guide me through the process? @Arjun Singh
I have on-premises Oracle 21c EE on Windows 10 and am receiving the same error: "Database Connection Error HTTP Status Code: 571". I have been searching for solutions but nothing has worked so far; please help.
Could you explain what you mean when you say that className is hidden in all components?
Your initial description is very vague, so it would be worth expanding your question.
It is very difficult to help without the minimum necessary information. I would appreciate it if you described your problem in detail and what you expect to happen for your code to be considered correct.
I was facing the same error; downgrading the version with pip install --force-reinstall "uvicorn<0.24" (quote the specifier so the shell does not treat < as a redirect) helped me. Thank you @QuimPuiggalí
Can I ask whether you are calling downloadEvfData in a loop to get refreshed images, or is this your complete solution for real-time streaming? I need to download a live view image continuously in the background, but using a while loop inside a thread causes EdsDownloadEvfImage to crash without errors; the code just stops and never exits the loop. Thank you in advance.
Can someone tell me what this is? I remember downloading it on my phone.
load_files.html
<div class='err box_link'>Авторизуйтесь для доступа</div>
Though I am too late, I hope someone will find this helpful.
You can find the clear steps in this article.
https://medium.com/@sp96.info/deploying-vue-js-app-to-firebase-hosting-0d4351714e4c
Do you have the dataset?
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Load the dataset (replace with the path to your CSV file)
df = pd.read_csv("your dataset here")
# Scatter plot of Age vs. Sales
sns.scatterplot(data=df, x="Age", y="Sales")
plt.title("Relationship between Age and Car Toy Sales")
plt.show()
Yes, it works. Replace With > Local History resolved my problem.
Big thank you for the good advice.
This happens because QuickBooks Desktop uses auto ref numbers, which have to be switched off for manually set txn numbers.
Which SOAP are you using?
I have the same issue. It is related to https://github.com/huggingface/transformers/pull/37337
In my case, installing accelerate fixed the issue as a workaround:
pip install accelerate
Were you able to find a fix? One way could be to save the dates/times in string format. I am in the middle of a bug fix though, and saving them in string format would mean that all the previous use cases would fail.
you are talking rocket science, steve is not proud
New templates and plugins for Sponzy on https://bbest.live/shop
Since you have mentioned voro++ on top of your current options, it seems logical to think that if you could use voro++ in MATLAB you could readily fix the problem at hand.
Good news! Someone ahead of you has posted the MEX libraries for voro++ on GitHub.
https://github.com/smr29git/MATLAB-Voro
Please give it a go and let us know.
Was a fix ever found for this? Our test emails sent from dev show this behavior, but the tests from prod do not; i.e., with the prod tests, when 'view entire message' is clicked the CSS is carried over, but not from dev. The ESP we are using is SailThru.
I have run into the same problem and am seeing very similar training logs to yours when using a multi-discrete action space, but the evaluation is not good. Did you ever find a solution?
Go to the link below and download all the models into your `ComfyUI/models` directory.
Issue - You might be using a VM, and because of this, internet access is blocked.
Reference - https://github.com/chflame163/ComfyUI_LayerStyle_Advance?tab=readme-ov-file#download-model-files
Have you tried giving up on this assignment? Worked for me.
Can anyone please answer this question? It is required for my university assignment. Please help, ASAP.
I have the same issue, did you ever resolve this?
I got the solution: please change the Gradle dependency.
Please replace it with:
implementation 'com.github.smarteist:Android-Image-Slider:1.4.0'
It's working for my project; I hope it will work for yours too.
If you have any issues, please let me know. Thanks.
Did you ever get that figured out?
have you found a solution yet? I've been struggling with this issue myself for the past two days
Counterintuitively, removing the "Codable" conformance from the @Model class's protocol conformance list eliminates the error.
The macro expansion is the issue.
See: "Cannot Synthesize" -- Why is this class not ready to be declared "@Model" for use with SwiftData?
I am also looking for an answer to this.
Did you get an answer about opening the parent app directly from the shield action extension?
DOH! I was checking an empty table by accident. My bad!