Air (n = 1.00)
-------------------------  Pool surface (where light exits)
            |
            |   θ₁ = 0°  (ray along the normal)
            |
            ↑
            |   Light ray traveling straight up (along the normal)
Water (n = 1.33)
Xcode 16 has slightly changed the UI for adding files/folders.
The first checkbox from older versions, "Destination: Copy items if needed", has become a drop-down list: "Copy" corresponds to the checkbox being checked, "Reference files in place" to it being unchecked, and "Move" is a newly added action equivalent to copy-then-delete in older versions.
The "Add folders" radio button group has also become a drop-down list. It only appears when you are adding a folder from disk, and the options remain the same.
To debug file-path-related issues, one approach is to use Xcode > Product > Show Build Folder in Finder and reveal the contents of the app bundle to see how the folders are actually structured in the bundle.
In short, "groups" are logical groupings that exist only in the project; all their files are laid flat at the root level of the bundle. "Folder references" are actual folders created in the bundle, so the files stay hierarchical.
After a long time I discovered the problem myself. This invalid form error occurs because, on local machines, the protocol is normally http instead of https. Just change the Magento settings to not use secure URLs in both the store and the admin.
You can find these settings in:
Change the following values to no:
1 - Use Secure URLs in the Store.
2 - Use Secure URLs in the Administration Section.
Before you call your function searchHandler(), check an if() condition. Keep the search term in a variable, e.g. searchterm, and only call the function when it is non-empty: if (searchterm) { searchHandler() }. Alternatively, do the check inside searchHandler() itself by wrapping all of its code in the condition: if (keyword) { /* rest of the logic */ }. I hope this helps.
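A minimal sketch of the guard described above; the function and parameter names follow the question, and the stand-in search logic is an assumption for illustration:

```javascript
// Only run the search when there is actually a term to search for.
function searchHandler(keyword) {
  if (!keyword) {
    return []; // empty or missing term: skip the rest of the logic
  }
  // ...the rest of the search logic would go here; a stand-in result for the sketch:
  return ["result for " + keyword];
}

console.log(searchHandler(""));     // []
console.log(searchHandler("cats")); // ["result for cats"]
```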
You can replace that URL with this one:
https://www.google.com/maps/vt/icon/name=assets/icons/spotlight/spotlight_pin_v4_outline-2-medium.png,assets/icons/spotlight/spotlight_pin_v4-2-medium.png,assets/icons/spotlight/spotlight_pin_v4_dot-2-medium.png&highlight=c5221f,ea4335,b31412?scale=1
By adding a revision tag you can give a revision a dedicated URL:
gcloud run deploy ..... --no-traffic --tag staging
This revision is then deployed with a URL of https://staging---{original-url.run.app} and receives none of the traffic sent to the normal URL.
Using np.frexp(arr)[1] comes in 4 to 6x faster than np.ceil(np.log2(x)).astype(int). Note that, as pointed out by @GregoryMorse above, some additional work is needed to guarantee correct results for 64-bit inputs (bit_length3 below).
import numpy as np

def bit_length1(arr):
    # assert arr.max() < (1 << 53)
    return np.ceil(np.log2(arr)).astype(int)

def bit_length2(arr):
    # assert arr.max() < (1 << 53)
    return np.frexp(arr)[1]

def bit_length3(arr):  # 64-bit safe
    _, high_exp = np.frexp(arr >> 32)
    _, low_exp = np.frexp(arr & 0xFFFFFFFF)
    return np.where(high_exp, high_exp + 32, low_exp)
Performance results, 10 iterations on a 100,000-element array via https://perfpy.com/868.
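As a standalone sanity check, the 64-bit-safe variant can be compared against Python's exact int.bit_length(); the sample values here are arbitrary and include one above 2**53 to exercise the high-word path:

```python
import numpy as np

arr = np.array([1, 2, 3, 255, 256, 1 << 20, (1 << 60) + 1], dtype=np.int64)

# 64-bit-safe bit length, reproduced from bit_length3 above
_, high_exp = np.frexp(arr >> 32)
_, low_exp = np.frexp(arr & 0xFFFFFFFF)
bits = np.where(high_exp, high_exp + 32, low_exp)

# Cross-check against Python's exact integer bit_length
assert [int(b) for b in bits] == [int(v).bit_length() for v in arr]
```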
My branch is pushed, and I can create a pull request and merge it on GitHub, but when I come back to Bash, switch to the main branch, and try to pull, it won't let me do it. It says it's "not streamed" (presumably that there is no tracking/upstream information). What could be the reason?
This appears to be a situation where the newer version of the NuGet package puts files in a different place, and they don't need to exist in source control. It's frustrating, but the solution is to exclude them from source control; when you restore packages, they will be restored to the solution appropriately.
Could you please let me know what version dependency we should use for these Maven dependencies? This is the error I see:
jakarta.servlet.ServletException: Handler dispatch failed: java.lang.NoClassDefFoundError: org/apache/hive/service/cli/HiveSQLException at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1096) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:974) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1011) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:903) ~[object-store-client-8.2.0.jar:8.2.0] at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:564) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) ~[object-store-client-8.2.0.jar:8.2.0] at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658) ~[object-store-client-8.2.0.jar:8.2.0] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:205) ~[object-store-client-8.2.0.jar:8.2.0] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[object-store-client-8.2.0.jar:8.2.0] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) ~[object-store-client-8.2.0.jar:8.2.0] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[object-store-client-8.2.0.jar:8.2.0] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.boot.actuate.web.exchanges.servlet.HttpExchangesFilter.doFilterInternal(HttpExchangesFilter.java:89) ~[spring-boot-actuator-3.1.4.jar:3.1.4] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.12.jar:6.0.12] at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[object-store-client-8.2.0.jar:8.2.0] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:117) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 
~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) ~[object-store-client-8.2.0.jar:8.2.0] at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) ~[object-store-client-8.2.0.jar:8.2.0] at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) ~[object-store-client-8.2.0.jar:8.2.0]
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.14.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-metastore</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-service-rpc</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-llap-server</artifactId>
    <version>4.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>javax.servlet</groupId>
            <artifactId>servlet-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-hplsql</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.3.1</version> <!-- Or compatible version -->
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>3.3.1</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.woodstox</groupId>
    <artifactId>woodstox-core</artifactId>
    <version>6.5.0</version>
</dependency>
<dependency>
    <groupId>org.codehaus.woodstox</groupId>
    <artifactId>stax2-api</artifactId>
    <version>4.2.1</version>
</dependency>
You can use the ready-made styles on this site: https://oaved.github.io/article-preview-component/
Just press F12 to open the DevTools "Elements" tab; you can see all the styles on that page.
Matching a blob source to an M3U8 URL can be challenging due to the nature of how media is streamed. Here's a brief explanation:
Blob URLs are object URLs used to represent file data in the browser. They are generated for client-side access and do not point to external resources directly. Typically, a blob URL is created from local files or downloaded data, such as video files stored in the browser’s memory.
M3U8 URLs are playlist files used to stream media content over HTTP. They are specifically used in HTTP Live Streaming (HLS) and represent a sequence of small media files. These URLs are more permanent and can often be traced back to an origin server.
Matching a blob source to an M3U8 URL requires insight into the streaming process:
If the blob URL is derived from an M3U8 stream, you may be able to match the content by analyzing the source network requests in the browser's developer tools. When streaming, check for the network activity that loads the M3U8 file. This can give clues if the blob content originates from a specific M3U8 stream. Tools like FFmpeg or browser extensions can also help to capture and analyze streaming media sources. In summary, while it's not always straightforward to correlate a blob source directly with an M3U8 URL, inspecting network requests during streaming and using diagnostic tools can aid in understanding the connection between the two.
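For example, once the underlying playlist URL shows up in the Network tab, FFmpeg can fetch and save the stream directly; the URL below is a placeholder:

```shell
# Remux an HLS stream into an MP4 file without re-encoding
ffmpeg -i "https://example.com/path/playlist.m3u8" -c copy output.mp4
```

The -c copy flag copies the audio/video segments as-is, so nothing is re-encoded.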
Expanding on the original question, you may ask what to do if the last received data is outdated.
IMO, the data you operate with should have some lastUpdated field with a timestamp set on the backend side. Then, on the client, you would probably still want to merge the fetched results with the real-time results, resolving duplicates in favor of the freshest data according to the lastUpdated timestamp.
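A minimal sketch of such a merge; the field names (id, lastUpdated, payload) and the Item shape are assumptions for illustration:

```typescript
interface Item {
  id: string;
  lastUpdated: number; // epoch millis, set on the backend
  payload: string;
}

// Merge fetched and real-time items, keeping the freshest copy of each id.
function mergeByFreshest(fetched: Item[], realtime: Item[]): Item[] {
  const byId = new Map<string, Item>();
  for (const item of [...fetched, ...realtime]) {
    const existing = byId.get(item.id);
    if (!existing || item.lastUpdated > existing.lastUpdated) {
      byId.set(item.id, item);
    }
  }
  return Array.from(byId.values());
}

const merged = mergeByFreshest(
  [{ id: "a", lastUpdated: 100, payload: "stale" }],
  [
    { id: "a", lastUpdated: 200, payload: "fresh" },
    { id: "b", lastUpdated: 50, payload: "new" },
  ]
);
console.log(merged.map((i) => i.payload)); // ["fresh", "new"]
```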
My first question is, why can I still use the main thread if the code after is never executed? Shouldn't the main thread be blocked there, waiting for its completion?
That's not how Swift Concurrency works.
The whole concept and idea behind Swift Concurrency is that the current thread is not blocked by an await call.
To put it briefly and very simply, you can imagine an asynchronous operation as a piece of work that is processed internally on threads. However, while waiting for an operation, other operations can be executed because you're just creating a suspension point. The whole management and processing of async operations is handled internally by Swift.
In general, with Swift Concurrency you should refrain from thinking in terms of "threads"; these are managed internally, and the thread on which an operation is executed is deliberately not visible to the outside world.
In fact, with Swift Concurrency you are not even allowed to block threads without further ado, but that's another topic.
If you want to learn more details about async/await and the concepts implemented by Swift, I recommend reading SE-0296 or watching one of the many WWDC videos Apple has published on this topic.
My second question is, how does the system know that the completion will never be called? Does it track at runtime whether the completion is retained in the external lib, and raise this leak warning when the owner of the reference dies?
See the official documentation:
Missing to invoke it (eventually) will cause the calling task to remain suspended indefinitely which will result in the task “hanging” as well as being leaked with no possibility to destroy it.
The checked continuation offers detection of mis-use, and dropping the last reference to it, without having resumed it will trigger a warning. Resuming a continuation twice is also diagnosed and will cause a crash.
For the rest of your questions, I assume that you have shown us all the relevant parts of the code.
My third question is, could this lead to a crash ? Or the system just cancel the Task and I shouldn't worry about it ?
Only multiple calls to a continuation would lead to a crash (see my previous answer). However, you should definitely make sure that the continuation is called, otherwise you will create a suspension point that will never be resolved. Think of it like an operation that is never completed and thus causes a leak.
And my last question is, what can I do if I cannot modify the external lib ?
According to the code you have shown us, there is actually only one possibility:
Calling doSomething multiple times causes calls to the same method that are still running to be canceled internally by the library, and therefore their completion closures are never called. You should therefore check the documentation of doSomething to see what it says about multiple calls and cancellations.
In terms of what you could do if the library doesn't give you a way to detect cancellations:
Here is a very simple code example that should demonstrate how you can solve the problem for this case:
private var pendingContinuation: (UUID, CheckedContinuation<Void, any Error>)?

func callExternalLib() async throws {
    if let (_, continuation) = pendingContinuation {
        print("Cancelling pending continuation")
        continuation.resume(throwing: CancellationError())
        self.pendingContinuation = nil
    }
    try await withCheckedThrowingContinuation { continuation in
        let continuationID = UUID()
        pendingContinuation = (continuationID, continuation)
        myExternalLib.doSomething {
            Task { @MainActor in
                if let (id, continuation) = self.pendingContinuation, id == continuationID {
                    self.pendingContinuation = nil
                    continuation.resume()
                }
            }
        } error: { error in
            Task { @MainActor in
                if let (id, continuation) = self.pendingContinuation, id == continuationID {
                    self.pendingContinuation = nil
                    continuation.resume(throwing: error)
                }
            }
        }
    }
}
Note that this solution assumes that there are no other scenarios in which doSomething never calls its completion handlers.
In my case, it happens when I return null inside the Storybook template. It works with <></>.

Works:

if (!ready) {
    return <></>
}

Doesn't work:

if (!ready) {
    return null
}
I am having the problem too, but the GitHub repositories are public and we need non-organization members to connect to them via the gh api. It will not work if they have a PAT.
This issue might be already resolved here:
I just renamed the directory from D:\Software\kafka_2.13-3.8.1 (which was giving the error) to D:\Software\Kafka_3.8.1 and it worked (Windows 11). No folder content change, no drive change.
If anyone is still confused about how to do this:

Create res/layout/lb_playback_fragment.xml:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/playback_fragment_root"
    android:layout_width="match_parent"
    android:transitionGroup="false"
    android:layout_height="match_parent">

    <com.phonegap.voyo.utils.NonOverlappingFrameLayout
        android:id="@+id/playback_fragment_background"
        android:transitionGroup="false"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <com.phonegap.voyo.utils.NonOverlappingFrameLayout
        android:id="@+id/playback_controls_dock"
        android:transitionGroup="true"
        android:layout_height="match_parent"
        android:layout_width="match_parent" />

    <androidx.media3.ui.SubtitleView
        android:id="@+id/exoSubtitles"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|center_horizontal"
        android:layout_marginBottom="32dp"
        android:layout_marginLeft="16dp"
        android:layout_marginRight="16dp" />

    <androidx.media3.ui.AspectRatioFrameLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center">

        <androidx.media3.ui.SubtitleView
            android:id="@+id/leanback_subtitles"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

    </androidx.media3.ui.AspectRatioFrameLayout>
</FrameLayout>
Create the class NonOverlappingFrameLayout.java:

package com.phonegap.voyo.utils;

import android.content.Context;
import android.util.AttributeSet;
import android.widget.FrameLayout;

public class NonOverlappingFrameLayout extends FrameLayout {

    public NonOverlappingFrameLayout(Context context) {
        this(context, null);
    }

    public NonOverlappingFrameLayout(Context context, AttributeSet attrs) {
        super(context, attrs, 0);
    }

    public NonOverlappingFrameLayout(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    /** Avoid creating a hardware layer when a Transition is animating alpha. */
    @Override
    public boolean hasOverlappingRendering() {
        return false;
    }
}
Inside PlayerFragment:

subtitleView = requireActivity().findViewById(R.id.leanback_subtitles)

player?.addListener(object : Player.Listener {
    @Deprecated("Deprecated in Java")
    @Suppress("DEPRECATION")
    override fun onCues(cues: MutableList<Cue>) {
        super.onCues(cues)
        subtitleView?.setCues(cues)
    }
})
I've got the same problem after updating my Android Studio to Ladybug (yes, bug). Try using icons_launcher instead:
dev_dependencies:
  flutter_test:
    sdk: flutter
  icons_launcher: ^3.0.0

icons_launcher:
  image_path: "assets/icon.png"
  platforms:
    android:
      enable: true
    ios:
      enable: true
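With the config in place, the icons are generated by running the package's generator; per the icons_launcher README the command is the following, but verify it against the version you install:

```shell
dart run icons_launcher:create
```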
A1111 doesn't support Flux; you need to use Forge UI, ReForge UI, or ComfyUI for Flux support. Read the documentation on civit.ai for a detailed explanation, plus the additional VAE and text encoders that are missing.
You might try devtools to install it from https://github.com/jverzani/gWidgets2RGtk2. I think the issue is not this package but rather the RGtk2 package, though that may be platform dependent, so it may still work for you.
To achieve this, you need to utilize the URL rewrites of the load balancer. This will allow you to change host, path, or both of them before directing traffic to your backend services.
Assuming everything is already set up, you just need to edit the routing rules of your load balancer.
Edit the load balancer and select Routing rules.
In the mode section, select "Advanced host and path rules".
In the YAML, create "routeRules" with "matchRules" for the URL request that will distinguish your backend service.
Indicate your desired path by creating a "urlRewrite" with "pathPrefixRewrite".
Sample YAML code below:
name: matcher
routeRules:
  - description: service
    matchRules:
      - prefixMatch: /bar  # Request path that needs to be rewritten.
    priority: 1
    service: projects/example-project/global/backendServices/<your-backend>
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /foo/bar  # This will be your desired path.
For a more comprehensive guide, you can follow this article.
Actually, this happened because of a Visual Studio error. I tried on a different computer and it works correctly, so I assume Visual Studio was not installed properly. After checking Event Viewer, I saw an MSVCP140.dll error.
How did you solve this problem?
SELECT * FROM your_table WHERE field1 NOT LIKE CONCAT('%', field2, '%');
I would like to add that you can also use TextRenderer.MeasureText() to help adjust the size of controls that refuse to scale properly.
I'm trying to update to VS2022, and I did all the steps above, but this error continues:
Severity Code Description Project File Line Suppression State Details Error LNK2019 unresolved external symbol _MainTask referenced in function __Thread@4 UI_SIM C:\EVO_WIN32\jura_evohome_main_app\GUISim.lib(SIM_GUI_App.OBJ)
Do you know how to solve this?
Didn't you find any solution?
For anyone who may run into this same issue and is looking for how to solve it, here's the fix.
spring:
  cloud:
    function:
      definition: myPayloadConsumerFromTopic1;myPayloadConsumerFromTopic2
Note that previously I was using commas to separate the function definitions, whereas now I am using semicolons. That fixed this issue.
To monitor DB behaviour using SQL queries, the DB user should have the SELECT ANY DICTIONARY privilege if you're not using a system account. Keep in mind that selecting from some views, like the AWR views (DBA_HIST_xxx), requires the Enterprise Edition with the Diagnostics Pack resp. Tuning Pack license.
To learn how to query several states of the DB via SQL, you may use the free analysis tool "Panorama for Oracle". Setting the environment variable PANORAMA_LOG_SQL=true before starting the app will log to the console all SQL statements that are executed while using the browser GUI of this app.
Since Rust hasn't yet solved these problems, I've extended a better solution (without a closure, so no need to silence Clippy) and published it.
Following the Dagger documentation here, what you can do is use the @AssistedInject annotation and create a factory to be used:
class CustomClass @AssistedInject constructor(
    val repository: Repository,
    @Assisted val name: String
) {
    @AssistedFactory
    interface CustomClassFactory {
        fun create(name: String): CustomClass
    }
}
Then you can @Inject the CustomClassFactory and call CustomClassFactory.create() to create your object.
It turns out that an output directory can be specified when running with Maven by passing a system property. The system property is karate.output.dir. For example:
mvn test -Dkarate.env="QA" -Dkarate.options="classpath:ui/test-to-run.feature" -Dkarate.output.dir="karate-custom-output-dir"
These are 3 alternative ways to avoid DST handling:
Use an aware datetime (with tzinfo=UTC) instead of naive:
>>> before_utc = before.replace(tzinfo=timezone.utc)
>>> after_utc = after.replace(tzinfo=timezone.utc)
>>> after_utc.timestamp() - before_utc.timestamp()
3600.0
>>> after_utc - before_utc
datetime.timedelta(seconds=3600)
The following 2 alternatives continue using naive datetimes, as in the OP. In the context of the datetime.timestamp() method, naive means local time, and the conversion is delegated to the platform function mktime() (as shown in @anentropic's answer).
Set the environment variable TZ=Etc/UTC.
Debian specific: change the content of /etc/timezone. On my system it contains Europe/Madrid. An interactive way to change it is through the dpkg-reconfigure tzdata command.
If several of those methods are used at the same time, the order of preference is the same in which they are listed.
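A quick sketch of the TZ environment variable approach (POSIX only, since time.tzset() is unavailable on Windows; the dates below are arbitrary):

```python
import os
import time
from datetime import datetime

# Make naive datetimes be interpreted as UTC by the platform mktime()
os.environ["TZ"] = "Etc/UTC"
time.tzset()  # POSIX only

before = datetime(2004, 10, 30, 2, 30)  # naive values that would straddle a
after = datetime(2004, 10, 30, 3, 30)   # DST change in many local time zones
print(after.timestamp() - before.timestamp())  # 3600.0, no DST surprises
```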
Maybe you could accomplish this with the infinitely confusing "condition" feature: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-auth-abac-attributes#list-blobs
https://www.alitajran.com/conditional-access-mfa-breaks-azure-ad-connect-synchronization/
Synchronization Service Manager: sign in on the Microsoft Entra Connect server, start the Synchronization Service Manager application, and look at the start and end times.
In this example, the start time and end time are 4/11/2021 while today is 4/19/2021, so it's been more than a week since Azure AD Connect synced.
Microsoft 365 admin center: sign in to the Microsoft 365 admin center and check the User management card. We can confirm that the Azure AD Connect last sync status was more than three days ago, and there is no recent password synchronization happening.
Azure AD Connect synchronization error: run Windows PowerShell as administrator and force-sync Microsoft Entra Connect with PowerShell. It will show the error below.

PS C:\> Import-Module ADSync
PS C:\> Start-ADSyncSyncCycle -PolicyType Delta
Start-ADSyncSyncCycle : System.Management.Automation.CmdletInvocationException: System.InvalidOperationException: Showing a modal dialog box or form when the application is not running in UserInteractive mode is not a valid operation. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application.
Event Viewer application events: start Event Viewer and go to Windows Logs > Application. The following Error events show up: Event 662, Directory Synchronization; Event 6900, ADSync; Event 655, Directory Synchronization; Event ID 906, Directory Synchronization.
Click on Event ID 906. Event 906, Directory Synchronization, reads: "GetSecurityToken: unable to retrieve a security token for the provisioning web service (AWS). The ADSync service is not allowed to interact with the desktop to authenticate [email protected]. This error may occur if multifactor or other interactive authentication policies are accidentally enabled for the synchronization account."
Solution for AD Connect synchronization failing: the solution for AD Connect synchronization breaking after implementing Azure AD MFA is to exclude the Azure AD Connect Sync Account from Azure AD MFA.
Service accounts, such as the Azure AD Connect Sync Account, are non-interactive accounts not tied to any particular user. They are usually used by back-end services to allow programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded, since MFA can't be completed programmatically.
Find the Azure AD synchronization account: in the event log error we looked at in the previous step, you can copy the account you need to exclude from Azure MFA.
If you want to check the account in Synchronization Service Manager, click Connectors, click the connector of type Windows Azure Active Directory (Microsoft), then click Properties. Click Connectivity and find the UserName.
Exclude MFA for the Azure AD Connect Sync Account: sign in to Microsoft Azure, open the menu, and browse to Azure Active Directory > Security > Conditional Access. Edit the Conditional Access policy that is enforcing MFA for the user accounts; in this example, it's the policy "MFA all users".
Under Assignments, click Users and groups and select Exclude. Check the checkbox Users and groups, find the synchronization account that you copied in the previous step, ensure that the policy is On, and click Save.
Verify Azure AD Connect sync status: you can wait for a maximum of 30 minutes, or if you don't want to wait that long, force-sync Microsoft Entra Connect with PowerShell:

PS C:\> Import-Module ADSync
PS C:\> Start-ADSyncSyncCycle -PolicyType Delta

The start time and end time changed to 4/19/2021, and Azure AD Connect sync now shows green checks in the Microsoft 365 admin center.
Keep reading: Add users to group with PowerShell »
Conclusion
You learned why the Azure AD Connect synchronization service stopped syncing after implementing Azure AD Multi-Factor Authentication. It happens because MFA is enforced on the Azure AD Connect Sync Account. Exclude the Azure AD Connect Sync Account from the Azure Conditional Access policy, and it will start syncing again.
A better way is to create a security group named Non-MFA and add the Azure AD Connect Sync Account as a member. This way, you will keep it organized if you need to add other service accounts in the future.
Did you enjoy this article? You may also like How to Connect to Microsoft Entra with PowerShell. Don’t forget to follow us and share this article.
Thanks, this helped me as well...
Not an answer, but an additional information. I am having the same issue with varnish. I tried a different docker image to listen on port 80 and it works:
docker run --rm -it -p 80:80 strm/helloworld-http
But varnish gives the same error as author posted. The very same varnish config/run command on a different server works just fine.
So the bottom line: it's not as simple as "requires root to acquire port 80", because other images use port 80 just fine, and the very same varnish config works on a different server. It must be something at the intersection of this particular Docker configuration and the internals of the varnish image.
The easiest way would be to use Paper (a fork of Spigot). It has far more optimizations than plain Spigot, and you can easily access the pathfinder of every mob.
Villager entity = // your entity
Location location = // your location
entity.getPathfinder().moveTo(location);
https://jd.papermc.io/paper/1.21.1/com/destroystokyo/paper/entity/Pathfinder.html
First, it's getElementById (it returns a single element, so there is no plural "Elements" form).
Then see https://dev.to/colelevy/queryselector-vs-getelementbyid-166n — this site explains the difference between querySelector and getElementById well.
You can also check caniuse.com for browser support.
Well it looks like username: root password: root works
Enable it from the Role Manager option under the WPBakery (Visual Composer) settings: give Administrator -> Post Type -> Custom, and select the required post types to display.
It didn't work; it's not adding the editor option.
After scratching my head for a little bit, the solution is to use the WordPress do_shortcode function.
The answer is
<?php
$desc = $product->get_description();
echo do_shortcode($desc);
?>
With the above I get the data from Visual Composer as formatted HTML. Hope this can help other programmers. Good night!
Can you explain a little bit more how you did it? I'm not a developer, but I'd appreciate some steps on how to do it.
I wrote a solution in C++.
Here is the link: https://khobaib529.medium.com/solving-electrical-circuits-a-graph-based-algorithm-for-resistance-calculations-921575c59946
There is currently a trial for this in Chrome.
To test the File System Observer API locally, set the #file-system-observer flag in about:flags. See https://developer.chrome.com/blog/file-system-observer
<p style="text-align: center;">
<iframe src="//youtube.com/embed/4liKzXo2lRM?rel=0&autoplay=1&modestbranding=1&controls=0&showinfo=1" width="700" height="394" frameborder="0" allow="autoplay" allowfullscreen="allowfullscreen">
<span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span>
</iframe>
</p>
Thank you for all the help! I have a solution below. I used a dictionary but stored the file as key and path as value. This allowed me to get around having duplicate paths. I also used the replace function to convert the path into the proper format.
I will look into the proposed solution of using ttk.Treeview to see if this is a better solution but as of now my program is working as hoped!
Thanks again!
import tkinter as tk
from tkinter import BOTH, LEFT, Button, Frame, Menu, Toplevel, filedialog
import os
ws = tk.Tk()
ws.title('Select Files to Import')
ws.geometry('500x200')
ws.config(bg='#456')
f = ('sans-serif', 13)
btn_font = ('sans-serif', 10)
bgcolor = '#BF5517'
dict = {}
def sort():
    global second
    second = Toplevel()
    second.geometry('800x400')
    menubar = Menu(second)
    menubar.add_command(label="Save List", command=save)
    menubar.add_command(label="Add Files", command=add)
    second.config(menu=menubar)
    global listbox
    listbox = Drag_and_Drop_Listbox(second)
    listbox.pack(fill=tk.BOTH, expand=True)
    directory = filedialog.askopenfilenames()
    n = 0
    for file in directory:
        key = os.path.basename(file)
        value = os.path.dirname(file)
        update_dict(key, value)
        listbox.insert(n, os.path.basename(file))
        n = n + 1

def update_dict(key, value):
    if key in dict:
        dict[key].append(value)
    else:
        dict[key] = [value]

def add():
    directory = filedialog.askopenfilenames()
    n = 0
    for file in directory:
        key = os.path.basename(file)
        value = os.path.dirname(file)
        update_dict(key, value)
        listbox.insert(n, os.path.basename(file))
        n = n + 1

def save():
    image = listbox.get(0, listbox.size())
    for x in image:
        string = str(dict[x]).replace('[', '').replace(']', '').replace('/', '\\').replace('\'', '')
        print(string + '\\' + x)

class Drag_and_Drop_Listbox(tk.Listbox):
    def __init__(self, master, **kw):
        kw['selectmode'] = tk.SINGLE
        kw['activestyle'] = 'none'
        tk.Listbox.__init__(self, master, **kw)
        self.bind('<Button-1>', self.getState, add='+')
        self.bind('<Button-1>', self.setCurrent, add='+')
        self.bind('<B1-Motion>', self.shiftSelection)
    def setCurrent(self, event):
        self.curIndex = self.nearest(event.y)
    def getState(self, event):
        i = self.nearest(event.y)
        self.curState = self.selection_includes(i)
    def shiftSelection(self, event):
        i = self.nearest(event.y)
        if self.curState == 1:
            self.selection_set(self.curIndex)
        else:
            self.selection_clear(self.curIndex)
        if i < self.curIndex:
            x = self.get(i)
            selected = self.selection_includes(i)
            self.delete(i)
            self.insert(i + 1, x)
            if selected:
                self.selection_set(i + 1)
            self.curIndex = i
        elif i > self.curIndex:
            x = self.get(i)
            selected = self.selection_includes(i)
            self.delete(i)
            self.insert(i - 1, x)
            if selected:
                self.selection_set(i - 1)
            self.curIndex = i
frame = Frame(ws, padx=20, pady=20, bg=bgcolor)
frame.pack(expand=True, fill=BOTH)
btn_frame = Frame(frame, bg=bgcolor)
btn_frame.grid(columnspan=2, pady=(50, 0))
sort_btn = Button(
btn_frame,
text='Individual Sort',
command=sort,
font=btn_font,
padx=10,
pady=5
)
sort_btn.pack(side=LEFT, expand=True, padx=(5,5))
# mainloop
ws.mainloop()
The solution is to add this to the editor options :
fixedOverflowWidgets: true,
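For context, here is a minimal sketch of where that option goes when creating the editor (the container id and sample value are assumptions, not from the original question):

```javascript
// Sketch: Monaco editor options. fixedOverflowWidgets renders overflowing
// widgets (suggest list, hover tooltips) with fixed positioning, so a parent
// element with overflow:hidden (e.g. a dialog) no longer clips them.
const editorOptions = {
  value: '// your code here',
  language: 'javascript',
  fixedOverflowWidgets: true,
};

// In the browser you would then create the editor with these options:
// monaco.editor.create(document.getElementById('container'), editorOptions);
```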
PS D:\FasoSmart\Enroulement\enrollement\next-js> npm install @mui/material --force
npm warn using --force Recommended protections disabled.
npm warn ERESOLVE overriding peer dependency
npm warn While resolving: @mui/[email protected]
npm warn Found: [email protected]
npm warn node_modules/next
npm warn   next@"^15.0.3" from the root project
npm warn
npm warn Could not resolve dependency:
npm warn peer next@"^13.0.0 || ^14.0.0" from @mui/[email protected]
npm warn node_modules/@mui/material-nextjs
npm warn   @mui/material-nextjs@"^5.16.6" from the root project
npm warn
npm warn Conflicting peer dependency: [email protected]
npm warn node_modules/next
npm warn   peer next@"^13.0.0 || ^14.0.0" from @mui/[email protected]
npm warn node_modules/@mui/material-nextjs
npm warn   @mui/material-nextjs@"^5.16.6" from the root project
Check if version and release are properly set in conf.py: In Sphinx, the version and release are typically defined in the conf.py configuration file. The version represents the short version number, and the release can be the full version number (e.g., 1.0.0).
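For example, the relevant conf.py settings look like this (the version numbers are placeholders):

```python
# Sketch of the two Sphinx settings in conf.py (values are placeholders).
# "version" is the short X.Y version shown in the docs;
# "release" is the full version string, including tags like rc1.
version = "1.0"
release = "1.0.0"
```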
I created a conversation using conversation memory, then asked the LLM to summarize the conversation. If you use a vector database to initialize the conversation, you can then assess the memory's message chat history.
conversation = [
{"user": "Hey Louise, can you help me with something?"},
{"ai": "Of course! What do you need help with today?"},
{"user": "I’m trying to organize a surprise birthday party for my friend. Any ideas?"},
{"ai": "That sounds fun! What does your friend enjoy? We can brainstorm a theme based on their interests."},
{"user": "They love the outdoors and hiking. I was thinking of doing something in a park."},
{"ai": "Great idea! How about a picnic party? You could set up a cozy area with blankets, snacks, and decorations that reflect nature. Maybe even some outdoor games!"},
{"user": "I love that! What kind of snacks should I include?"},
{"ai": "Consider packing finger foods that are easy to eat outdoors. Sandwiches, fruit skewers, veggie platters, and some homemade cookies would be perfect. Don’t forget refreshing drinks!"},
{"user": "Good call! How about decorations? I want it to feel special."},
{"ai": "You could use biodegradable decorations like paper lanterns, wildflower bouquets, and string lights. A nature-themed cake would also be a nice centerpiece!"},
{"user": "These are awesome suggestions! How many people should I invite?"},
{"ai": "It depends on the size of the park area you’ll be using. A smaller group of close friends might make it feel more intimate, while a larger group could bring more energy. What do you think?"},
{"user": "Maybe around 10-15 people. That feels manageable."},
{"ai": "Perfect! Just remember to send out the invites soon so everyone can mark their calendars. Do you need help with the invitation wording?"},
{"user": "Yes, please! I want it to be fun and casual."},
{"ai": "How about this: “Join us for a surprise outdoor adventure to celebrate [Friend’s Name]’s birthday! Bring your favorite snacks and your love for nature. Let’s make some unforgettable memories!”"},
{"user": "I love it! Thanks, Louise. You’ve been a huge help."},
{"ai": "Anytime! Have a blast planning the party, and let me know if you need anything else."}
]
def example_tool(input_text):
    system_prompt = "You are a Louise AI agent. Louise, you will be fair and reasonable in your responses to subjective statements. Logic puzzle the facts or theorize future events or optimize facts providing resulting inferences. Think"
    return f"{system_prompt} Processed input: {input_text}"
# Imports assumed for this snippet (LangChain-style APIs; adjust to your installed versions)
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI as LangChainChatOpenAI

# Initialize the LLM (key holds your OpenAI API key)
llm = LangChainChatOpenAI(model="gpt-4o-mini", temperature=0, openai_api_key=key)
# Define tools
tools = [
Tool(
name="ExampleTool",
func=example_tool,
description="A simple tool that processes input text."
)
]
# Initialize memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Loop through the conversation and add messages to memory
for message in conversation:
    if "user" in message:
        memory.chat_memory.add_user_message(message["user"])
    elif "ai" in message:
        memory.chat_memory.add_ai_message(message["ai"])
# Initialize the agent with memory
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
memory=memory
)
# Query to recall previous discussion
query = "Tell me in detail about our previous discussion about the party. Louise enumerate the foods that will be served at the party."
response = agent.run(query)
# Print the response
print(response)
print(memory.chat_memory.messages)
Sure! Here's a more detailed step-by-step manual for setting up Django in a virtual environment for a new project:
When working with multiple Django projects, it's best practice to create a new virtual environment (virtualenv) for each project to keep dependencies isolated and avoid conflicts. This guide walks you through the steps to install Django in a virtual environment.
Install virtualenv (if you haven't already)
If you don't have virtualenv installed, you can install it globally using pip:
pip install virtualenv
Navigate to the directory where you want to create your new project, and then create a new virtual environment. Replace myprojectenv with your desired virtual environment name:
virtualenv myprojectenv
This will create a new folder called myprojectenv containing the isolated environment.
Once the environment is created, activate it. The method to activate depends on your operating system:
On macOS/Linux:
source myprojectenv/bin/activate
On Windows:
myprojectenv\Scripts\activate
When activated, your command prompt will change to show the name of the virtual environment (e.g., (myprojectenv)).
With the virtual environment active, install Django using pip. This will install the latest stable version of Django:
pip install django
Once Django is installed, you can create your new Django project using the django-admin tool. Replace myproject with the name of your project:
django-admin startproject myproject
This will create a new myproject directory with the necessary files to get started with Django.
To make sure Django was installed successfully, you can check the version of Django by running:
django-admin --version
This should output the installed version of Django.
Step 7: Create a requirements.txt File (Optional but Recommended)
To keep track of your project's dependencies, you can generate a requirements.txt file. This file can later be used to recreate the environment.
Run the following command to generate a requirements.txt file for your project:
pip freeze > requirements.txt
This will list all the installed packages in the environment, including Django, in a file named requirements.txt.
Once you’re done working in the virtual environment, you can deactivate it by running:
deactivate
This will return you to your system's default Python environment.
A requirements.txt file ensures that you (and others) can recreate the same environment on different machines with the exact same dependencies. If you start a new Django project and want to set up a new virtual environment, repeat these steps and regenerate requirements.txt (Step 7) if needed. Each time you create a new virtual environment, you'll need to reinstall Django and other dependencies, but this ensures your projects remain isolated and have their own specific package versions.
Troubleshooting:
- virtualenv: If you encounter a "command not found" error when trying to use virtualenv, make sure it's installed globally by running pip install virtualenv.
- Run pip freeze to generate requirements.txt regularly to capture any changes in your environment's dependencies.
- Tools like virtualenvwrapper or pipenv can simplify working with multiple virtual environments and dependencies.
This workflow ensures each Django project has a clean, isolated environment with the correct dependencies, providing better stability and compatibility across different projects.
Create your table with a NOT NULL primary key constraint:
create table if not exists inx_test_table
(
id int unsigned NOT NULL auto_increment primary key,
...
)
And add $fillable in your model:
protected $fillable = [
'name'
];
I had the same question, and it turns out that the figure needs to be in its own paragraph (preceded and followed by a newline and two spaces) in order to show the caption, as described here.
Try to add this
@EnableJpaRepositories(basePackages = "com.soib.kwbo.repository")
and check that your datasource is properly configured. :D
Thanks to @Gustav for alerting me to your post.
As my article made clear, when the Total Row feature was added, the Access team didn't add a VBA method to enable/disable the total row from form VBA. Seventeen years on from 2007, they aren't going to do so now! However, the code I provided in my article using CommandBars does the job perfectly:
Application.CommandBars.ExecuteMso "RecordsTotals"
Indeed, I supplied an example app which shows this code in use in a datasheet form. NOTE: So I could include buttons on the datasheet, it's actually a split form with the single record view hidden.
So I'm not sure what functionality you want that I haven't provided. Feel free to email me using the link in my article if you need further help. We can always add further info to this thread later for the benefit of others.
The RAS daemon (rasdaemon) provides AER reporting capabilities. It can be found in both Red Hat and Ubuntu Linux.
To add a little more understanding to this:
What you're trying to do with ext['bouncycastle.version'] = '1.72'
is override a variable that is created and used by the io.spring.dependency-management
plugin you likely have imported at the top of your build.gradle.
Those variables are not exhaustive, nor are they magic. They cover a lot of very common packages, but there are plenty that they don't cover.
Here is a list from the spring docs of the variables that the plugin introduces (or rather, most of them. You can check the source code under the tag for your spring version for a comprehensive list). You'll notice that bouncycastle.version
isn't a variable that the plugin actually creates or references.
For transitives that aren't covered by the plugin, I find that gradle's recommended default solution of constraints is best. However, it doesn't give us a remedy for the problem we have here, which is that the desired version of the package only exists in a new package that the old package has moved to.
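For reference, the constraints approach looks like this (a sketch; the coordinates shown are examples, and as noted it cannot retarget the renamed artifact):

```groovy
// Sketch: pinning a transitive dependency version with a Gradle constraint.
// This only adjusts the version of a module that is already on the graph;
// it cannot substitute one module (jdk15on) for another (jdk18on).
dependencies {
    constraints {
        implementation('org.bouncycastle:bcprov-jdk18on:1.78') {
            because 'pin BouncyCastle to a patched version'
        }
    }
}
```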
I didn't find a stellar answer to this question anywhere, so I ended up implementing this at the top of my build.gradle:
configurations.all {
resolutionStrategy {
eachDependency { DependencyResolveDetails details ->
if (details.requested.group == 'org.bouncycastle' && details.requested.name == 'bcpkix-jdk15on') {
details.useTarget('org.bouncycastle:bcpkix-jdk18on:1.78')
}
if (details.requested.group == 'org.bouncycastle' && details.requested.name == 'bcprov-jdk15on') {
details.useTarget('org.bouncycastle:bcprov-jdk18on:1.78')
}
}
}
}
I'm not in love with this solution, but of everything I've seen it's the most direct answer to the question "how do I replace jdk15on with jdk18on?": you identify usages of jdk15on and replace them with jdk18on.
The org.jboss.arquillian:arquillian-bom
version you are using is from 2017 - https://mvnrepository.com/artifact/org.jboss.arquillian/arquillian-bom - while the org.jboss.arquillian.junit:arquillian-junit-core
is recent. So the classpath misalignment is very likely the culprit.
Also, obtain complete stack traces, and try running Maven with -X / --debug for more debug information.
Got the similar issue here. "numpy._core._exceptions._ArrayMemoryError: Unable to allocate 3.58 GiB for an array with shape 500000000"
My PC has 16G mem, but it is not able to allocate 3.58G for an array. It seems to be the contiguous memory management issue "NumPy requires a contiguous block of memory to allocate the array. Even if your system has enough total memory, it might not have a single contiguous block large enough to satisfy the request".
Maybe try using a memory-mapped array in NumPy: data = np.memmap('data.dat', dtype='int64', mode='w+', shape=(500000000,)) — or break the dataset into smaller chunks and process them separately.
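A minimal sketch of the chunked approach (the summation here is just a stand-in for whatever per-chunk processing you actually need):

```python
import numpy as np

def chunked_sum(n, chunk=1_000_000):
    """Process a logical array of length n in fixed-size chunks, so no single
    contiguous allocation of the full size is ever requested."""
    total = 0
    for start in range(0, n, chunk):
        stop = min(start + chunk, n)
        # Stand-in for loading/generating one chunk of real data
        block = np.arange(start, stop, dtype=np.int64)
        total += int(block.sum())
    return total
```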
As you've shown, you cannot do this.
You should follow your ideal path and follow @linda-lawton-daimto advice.
Workspace APIs including Slides are not governed by Cloud IAM and so setting the Editor role on the Service Account is ineffectual. Workspace APIs work with OAuth scopes which you're correctly setting.
The problem is that Workspace APIs protect user content and the only way an arbitrary Service Account can access Workspace API user-content is:
1. Domain-wide delegation to approve the Service Account for this purpose, and
2. designating a specific user for the DWD'd Service Account to impersonate.
There's a hacky (!?) alternative approach which involves sharing a Workspace document (e.g. a Slides presentation) with an arbitrary Service Account's email address (~ {account}@{project}.iam.gserviceaccount.com).
I am using Superset 4.0.2; it is possible to export dashboards AND charts as PDF and image (JPEG).
import androidx.compose.ui.text.intl.Locale

val locale = Locale.current
val javaLocale = java.util.Locale(locale.language)
Download links, 32-bit and 64-bit:
https://cdn.mysql.com/archives/mysql-8.0/mysql-server_8.0.33-1ubuntu18.04_amd64.deb-bundle.tar
https://cdn.mysql.com/archives/mysql-8.0/mysql-server_8.0.33-1ubuntu18.04_i386.deb-bundle.tar
https://downloads.mysql.com/archives/community/?version=8.0.33
EarlyStoppingRounds was added to XGBoost in 1.4.0.
If the version is below 1.4.0, upgrade XGBoost with:
pip install --upgrade xgboost
I've found a solution using the <wbr> element and a little bit of JavaScript.
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/wbr
Here is the code :
let element = document.getElementById("paragraph"); // Get the element
let width = element.offsetWidth; // Get the current width of the element
element.style.width = width - 1 + "px"; // Set the new width to be the original width minus 1px
#paragraph {
white-space: nowrap;
width: fit-content
}
<p id="paragraph">
one two<wbr /> three four
</p>
What you describe was a bug in FMX in 10.3, and presumably in 10.4 as well. It was fixed in a later version.
Thanks for the A2A. @A_B: I have gone through the code of both microservices, and the issue seems to be in RouteValidator, in the line: public static final List openApiEndpoints = List.of("/api/auth/v1/register","/api/auth/v1/token","/eureka","/api-docs");
just add "/api/auth/v1/"
public static final List openApiEndpoints = List.of("/api/auth/v1/","/api/auth/v1/register","/api/auth/v1/token","/eureka","/api-docs");
As the 403 Forbidden error indicates: a 403 Forbidden error occurs when you do not have permission to access a web page or something else on a web server.
I hope you are aware of this error code; since the request can't even hit the controller, you are not seeing any sysout or logger output for /getRolesByUsername.
For your reference, please check this explanation of the error code: https://www.howtogeek.com/357785/what-is-a-403-forbidden-error-and-how-can-i-fix-it/
Love using answers from 2009! (re: Susannah [Google Employee]) You can also disable the mouse-scroll:
//cancels scroll
google.maps.event.addDomListener(div, 'mousewheel', cancelEvent);
Not working:
System.Threading.ThreadStateException: 'The ActiveX control '8856f961-340a-11d0-a96b-00c04fd705a2' cannot be instantiated because the current thread is not in a single-threaded container.'
Ionic provides ion-footer component. https://ionicframework.com/docs/api/footer
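As a sketch of the usual pattern from the docs (the toolbar content is a placeholder):

```html
<!-- Sketch: a footer pinned to the bottom of an Ionic page -->
<ion-footer>
  <ion-toolbar>
    <ion-title>Footer</ion-title>
  </ion-toolbar>
</ion-footer>
```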
with the help of one of my friends here who guided me, I modified the js code like this:
window.onscroll = function () {
Scroll_Indicator_blog_function()
};
function Scroll_Indicator_blog_function() {
    var winScroll = document.documentElement.scrollTop || document.body.scrollTop;
    var height = document.documentElement.scrollHeight - document.documentElement.clientHeight;
    var scrolled = (winScroll / height) * 100;
    document.getElementById("Scroll_Indicator_blog").style.width = scrolled + "%";
}
.progress-container {
position: fixed;
top: 0;
display: block;
width: 100%;
height: 8px;
background: var(--vesal-background-color7);
z-index: 9999;
}
.progress-bar {
height: 100%;
background: var(--vesal-background-color2);
box-shadow: 0 5px 10px 0 var(--vesal-boxshadow-color-1);
width: 0%;
transition: ease-in-out 0.5s;
}
<div class="progress-container">
<div class="progress-bar" id="Scroll_Indicator_blog"></div>
</div>
But now I have a new problem! The height calculated by this script spans the whole page, from the top of the header to the bottom of the footer, which naturally isn't useful for showing how much of the content has been read. How can I limit the measurement to just the section in question, for example counting only the height from the beginning of a section to its end, and not counting the comments section, header, and footer? Thanks again for the advice.
Use cookies. When the user logs in, send a cookie with the proper domain, named, say, "auth-cookie". It will contain your JWT token. This cookie will be automatically sent back to you every time the user sends a request. Set the httpOnly field to true so that a hijacker cannot read the token using JavaScript. On logout, set the "auth-cookie" cookie to blank.
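As a sketch of what such a Set-Cookie header carries (the cookie name, domain, and lifetime are illustrative assumptions), here is a version using Python's standard library:

```python
from http.cookies import SimpleCookie

def make_auth_cookie(jwt_token, domain="example.com", max_age=3600):
    """Build a Set-Cookie header value for an HttpOnly JWT cookie.
    The cookie name 'auth-cookie' and all attributes are illustrative."""
    cookie = SimpleCookie()
    cookie["auth-cookie"] = jwt_token
    morsel = cookie["auth-cookie"]
    morsel["httponly"] = True    # not readable from JavaScript
    morsel["domain"] = domain
    morsel["path"] = "/"
    morsel["max-age"] = max_age
    return morsel.OutputString()

# On logout, send the same cookie name with an empty value and Max-Age=0.
```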
Stemming isn’t really intended to handle transformations like "America" to "American" (or vice versa), because they don’t share a common morphological root. Stemming generally aims to reduce inflected forms of a word to a base form (e.g., "running" to "run"), focusing on suffix-stripping rather than transforming between nouns and adjectives.
I created a script and achieved the same results with Porter Stemming as you find. However, using Lancaster stemming from NLTK I achieved Runner --> Run
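A quick way to compare the two stemmers (this assumes nltk is installed; no corpus download is needed for the stemmers themselves):

```python
from nltk.stem import PorterStemmer, LancasterStemmer

porter = PorterStemmer()
lancaster = LancasterStemmer()

# Compare how the suffix-stripping differs between the two algorithms;
# the aggressive Lancaster stemmer reduces "runner" further than Porter does.
for word in ["running", "runner", "america", "american"]:
    print(word, "->", porter.stem(word), "/", lancaster.stem(word))
```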
Add watermark to ChartJS 4.4.6:
const myCustomPlugins = {
  // beforeDraw or afterDraw
  beforeDraw: function (chartInstance) {
    // Get ctx of chart
    const context = chartInstance.ctx;
    // Set background color for chart
    context.fillStyle = "white";
    // Set sizes and positions
    context.fillRect(0, 0, context.canvas.width, context.canvas.height);
    // Create new image
    let img = new Image();
    // Set source of image (the image should already be loaded/cached;
    // otherwise draw it inside img.onload instead)
    img.src = "assets/images/watermark.svg";
    // Draw the image on the canvas context
    context.drawImage(img, context.canvas.width / 4 - img.width / 2, context.canvas.height / 4 - img.height / 2);
  }
}
new Chart(document.getElementById('myChart'),{
// Set your chart type, chart data, chart options here
plugins: [myCustomPlugins]
});
public void AddNewColumn(string columnName)
{
    // NOTE: column names cannot be parameterized; validate/whitelist
    // columnName before interpolating it to avoid SQL injection.
    string query = $"ALTER TABLE YourTable ADD COLUMN {columnName} TEXT";
    using (SQLiteConnection connection = new SQLiteConnection(connectionString))
    {
        SQLiteCommand command = new SQLiteCommand(query, connection);
        connection.Open();
        command.ExecuteNonQuery();
    }
}
Replace desired_capabilities with options:
from selenium.webdriver.chrome.options import Options as ChromeOptions

options = ChromeOptions()
driver = webdriver.Remote(command_executor=executor, options=options)
Simply create a feed and call it, say, bookmark.
If a user bookmarks a post on, say, another feed called "timeline", then you would add a reaction of kind "bookmark" on that post, and at the same time use "targetFeeds" to put it on the bookmark feed.
Example:
const bookmark = await client.reactions.add(
  'bookmark',
  { id: '6d47e970-d317-11eb-8080-800109d57843' },
  {},
  { targetFeeds: ["bookmark:zoro"] }
);
Then that feed will have these bookmarks.
You can check "own reaction" to see if activity on main feed (e.g timeline), which you did make the bookmark from, is already bookmarked or not and show banner to user or icon. And of course you have now the bookmark feed which can list all the bookmarks.
All likes/comments would be visible on all feeds, as this is fan out.
I tried installing using the pip install picamera and it installed but is this different than picamerazero? Or are they essentially the same?
I was also getting an error trying to install picamerazero. It said the following:
"× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
This module only works on linux
[end of output]"
I'm trying to download on a Windows computer as well.
I would do this as an inset axes rather than subplots, then the axes will line up, and the decorations on the top axes will not move the axes:
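A sketch of that idea with matplotlib (the data, sizes, and filename are placeholders):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])

# An inset axes sitting on top of the main axes and sharing its x-axis;
# [x0, y0, width, height] are in axes-fraction coordinates, so y0 > 1
# places it above the main axes and they stay aligned.
top_ax = ax.inset_axes([0, 1.05, 1, 0.25], sharex=ax)
top_ax.plot([0, 1, 2], [1, 0, 1])
top_ax.tick_params(labelbottom=False)  # hide duplicated x tick labels

fig.savefig("inset_example.png", bbox_inches="tight")
```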
I know this is an old question but for anyone looking, one solution may be to clear your cached images and data in your browser.
This has been particularly prevalent for me on Google Chrome, especially if any CSS changes were made.
On Chrome (Windows):
Hit Ctrl+Shift+Delete to bring up the "Delete Browsing Data..." menu.
Select only the "Cached images and files" checkbox.
Press "Delete data"
You may need to refresh the page, but the bulk of the time I've encountered this issue working with Blazor this has remedied the issue.
Found a couple of things. The reason I was getting the ERR_TOO_MANY_REDIRECTS, was due to the system looking for a PageNotFound view and there wasn't one. So added one and that fixed that problem.
Loading the UserList page was giving the Status Code: 302 Found. This was a middleware problem -
Category: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware
EventId: 1
SpanId: 2ff3e9926a1b6c03
TraceId: 7314731274212a962b5c1323b0299200
ParentId: 0000000000000000
RequestId: 400070d4-0005-f900-b63f-84710c7967bb
RequestPath: /UserAdmin/UserList
An unhandled exception has occurred while executing the request.
The problem was System.InvalidOperationException: There is already an open DataReader associated with this Connection which must be closed first.
This was solve by adding
MultipleActiveResultSets=True
to the connection string.
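For reference, a hypothetical connection string with the flag added (server and database names are placeholders, not from the original setup):

```
Server=(localdb)\MSSQLLocalDB;Database=MyAppDb;Trusted_Connection=True;MultipleActiveResultSets=True
```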
Thank you for your help.
This also happened to me today as well. How to fix this?
For what it's worth - my issue was that my .env file was missing the '.' (aka I named it just 'env').
Setting "terminal.integrated.shellIntegration.enabled" to false fixed the issue. This setting seems to affect other terminal-related issues too.
I know this topic has an accepted answer, but the answer redirects towards the use of Tkinter.
For anyone looking for a Dearpygui specific solution, I gave an answer here : https://stackoverflow.com/questions/79189436/how-to-display-a-gif-in-dearpygui/79189437#79189437 on how to animate gif and videos in dpg.
NB: Sound is however not supported by my solution.
I was getting the same error and was able to fix it downgrading SQLite to 1.0.113. It seems that this error was introduced in Sqlite version 1.0.115. See here:
https://sqlite.org/forum/info/f80d7580fc07ce5c
I was using .net 8 and sqlite 1.0.119. On the development computer it was working just fine but when deployed on an container (windows nano server ltsc 2019) I got that error.
So I downgraded the SQLite package version on my Visual Studio project to 1.0.113.0. But note that it is not available directly in the nuget package repository. You have to download it from the Sqlite website:
And then, under Visual Studio, add the folder where this file is located (anywhere; in my case the project folder) as a local package repository (right-click on the project -> Manage NuGet Packages -> click the wheel at the top right -> Package sources -> click "+" -> set the path). Done.
Were you able to modify fields via script? I had the same question and your code helped me. I'm trying to modify via mutation, but it doesn't seem to work.
did you get a working solution for this?
Model classes should generally be final.
Abstract classes are incomplete classes with some method implementations pending.
If you want an entity which is abstract, you are perhaps trying to implement some feature on top of your entity.
You can do that, but it wouldn't be advisable. That logic can be put inside a service class which performs the CRUD operations. This fulfils the Single Responsibility principle.
.full-height { /* example selector; use your own */
  height: 100vh;
}
@supports (height: 100dvh) {
  .full-height {
    height: 100dvh;
  }
}
The issue was that the append action was called at one point after the finish action, which led to the creation of a new file with the _unfinished suffix, even though the file had already been marked as 'finished'.
I couldn’t find the right answer to the problem with the prometheus.io/scrape annotation not working in Kubernetes, so I decided to dig into the original Prometheus Helm chart. For those facing the same issue, here’s an explanation.
Understanding Prometheus Configurations in Kubernetes
First, it’s important to note that Prometheus can be configured in multiple ways within Kubernetes. One common method is using the Custom Resource Definition (CRD) called ServiceMonitor. In this setup, the Prometheus Operator continuously monitors resources specified by ServiceMonitor objects.
• serviceMonitorSelector Parameter: This parameter in the Prometheus Operator configuration selects which ServiceMonitor resources to consider.
serviceMonitorSelector:
  matchLabels:
    team: frontend
• No Default Annotations: By default, services or pods aren’t monitored based on annotations alone. You need to:
• Create a ServiceMonitor: Define a ServiceMonitor resource that matches your services or pods.
• Set Appropriate Labels: Ensure your services or pods have labels that match the matchLabels in your ServiceMonitor.
However, the default kube-prometheus-stack Helm chart doesn’t create a ServiceMonitor for your deployments out of the box.
The Origin of prometheus.io/scrape Annotation
This brings up the question:
Where does the prometheus.io/scrape annotation come from, and how can I use it?
The answer lies in the original Prometheus Helm chart, which you can find here. Unlike the kube-prometheus-stack, this Helm chart doesn’t rely on Prometheus CRDs. Instead, it:
• Deploys Prometheus Directly: Runs Prometheus in a pod with manual configurations.
• Uses kubernetes_sd_configs: Specifies how Prometheus should discover services.
kubernetes_sd_configs:
  - role: endpoints
This tells Prometheus to use the endpoints role, allowing it to scrape targets based on annotations like prometheus.io/scrape.
• Relabeling Configurations: Includes additional settings to manipulate labels and target metadata.
How to Resolve the Issue
If you’re using kube-prometheus-stack, you have two main options:
1. Set Up a ServiceMonitor
• Create a ServiceMonitor Resource: Define it to match your services or pods.
• Adjust serviceMonitorSelector: Ensure the Prometheus Operator picks up your ServiceMonitor.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: web
2. Modify Prometheus Configuration
• Include endpoints Role: Adjust your Prometheus config to use the endpoints role like in the original Helm chart.
• Leverage Annotations: This allows you to use annotations like prometheus.io/scrape without needing ServiceMonitor.
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
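To make the effect of that relabel rule concrete, here is a toy Python sketch (not Prometheus code; the target names are made up): the keep action drops every target whose source label does not fully match the regex.

```python
import re

# Discovered targets as Prometheus would see them (made-up examples).
targets = [
    {"name": "web",   "__meta_kubernetes_service_annotation_prometheus_io_scrape": "true"},
    {"name": "db",    "__meta_kubernetes_service_annotation_prometheus_io_scrape": "false"},
    {"name": "cache"},  # annotation absent
]

def keep(target, source_label, regex):
    # Prometheus anchors relabel regexes, hence fullmatch rather than search.
    return re.fullmatch(regex, target.get(source_label, "")) is not None

kept = [t["name"] for t in targets
        if keep(t, "__meta_kubernetes_service_annotation_prometheus_io_scrape", "true")]
print(kept)  # only the annotated-and-true target survives
```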
Summary
If the prometheus.io/scrape annotation isn’t working with kube-prometheus-stack:
• Use a ServiceMonitor: It’s the preferred method when using the Prometheus Operator.
• Copy Configuration from Original Helm Chart: Adjust your Prometheus configuration to manually include endpoint discovery based on annotations.
By following these steps, you should be able to enable Prometheus to scrape your services or pods as expected.
I have the same error. The strange thing is that it only started after I recreated the project with git clone; on another PC with the same project and libraries there is no problem.
Were you able to solve it?
https://github.com/j-oss2023/mysql-dynamic-inventory
I checked your solution, but it produces the following problem:
[WARNING]: * Failed to parse ca-dynamic-inventory-sqlserver.yaml with auto plugin: No module named 'pyodbc'
[WARNING]: * Failed to parse ca-dynamic-inventory-sqlserver.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse ca-dynamic-inventory-sqlserver.yaml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
[WARNING]: Unable to parse ca-dynamic-inventory-sqlserver.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
Do you have any suggestions?
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.6.0</version>
</dependency>
Use this one. Remove any other configuration you might have added. Access it on ${baseUrl}/swagger-ui/index.html
If I search for that element in the Chrome console as:
$('#screening_questions[0].multiple_choice[0]-dealbreakerField')
The XPath selection method in the Chrome browser console is:
$x('xpathValue')
A more detailed explanation of the XPath selection console feature can be found here: https://stackoverflow.com/a/22571294/512463
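The same XPath idea works outside the browser too. As a small illustration (hypothetical markup, using Python's standard library rather than the Chrome console), an attribute predicate can select an id containing brackets and dots, characters that would otherwise need escaping in a CSS selector:

```python
import xml.etree.ElementTree as ET

# Hypothetical markup containing an id with brackets and dots.
doc = ET.fromstring(
    '<form>'
    '<input id="screening_questions[0].multiple_choice[0]-dealbreakerField" type="checkbox"/>'
    '</form>'
)

# XPath attribute predicate: the awkward id is just a quoted string here.
el = doc.find(".//*[@id='screening_questions[0].multiple_choice[0]-dealbreakerField']")
print(el.get("type"))  # → checkbox
```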
Hello, it seems this error comes from torch. You can try this (note that my Python version is 3.10.15):
conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 -c pytorch