The missing piece not mentioned in any of these answers nor any other search result I can find (though the OP hints at it) is that the date cannot be a plain string; it must be cast to a date object:
set startDate to date "Sunday, April 20, 2025 at 12:00:00 AM"
Make sure your MetaMask (or other wallet) is unlocked and you're on the correct network (e.g., Ethereum Mainnet, Polygon, etc.).
Some dApps only support specific networks. If you're on the wrong one, the dApp won’t recognize your wallet.
Check the docs here
Both appSettings and connectionStrings need to be arrays, but you are passing connectionStrings as an object (connectionStrings object = {}) instead of an array.
First, thanks for your kind answer. Sure, I know this, but even after adding the separator after every line, or adjusting them into one single line manually in the makefile, typing make again still gives this error:
$ make
C:/Program Files/GNU MCU Eclipse/Build Tools/2.12-20190422-1053/bin/make all-recursive
make[1]: Entering directory 'E:/download/opencore-amr-0.1.3'
Making all in amrnb
/usr/bin/sh: line 18: C:/Program: No such file or directory
make[1]: *** [Makefile:344: all-recursive] Error 1
make[1]: Leaving directory 'E:/download/opencore-amr-0.1.3'
make: *** [Makefile:275: all] Error 2
Because the makefile is created by the configure process, I guess the root cause lies in the configure file, but I don't know how to modify it.
Thanks for your reply.
Best regards
Stephen
https://bobj-board.org/t/page-headers-in-a-subreport/68337/4
This was useful for me:
In the subreport, create a formula: FakePageHeader
WhileReadingRecords;
" "
Go to the ‘Insert’ menu and click ‘Group’. Select the FakePageHeader
formula.
Select the ‘Repeat Group Header on Each New Page’ option, and click ‘OK’. This inserts a new group at the lowest, or innermost, grouping level. You will need to move this group to the highest, or outermost, grouping level.
Go to ‘Report’ menu and click ‘Group Expert’. Use the up arrow to move this newest group up to the top of the list.
Move all the headers that you would like repeated into this Header for the @FakePageHeader group.
Compose Material3 Version 1.4.0-alpha10
minTabWidth: Dp = TabRowDefaults.ScrollableTabRowMinTabWidth
The reflect workaround in another answer is broken because they renamed the field, although the workaround is no longer needed anyway.
You can also specify missing values during DataFrame creation, especially if you are reading the data from a file.
import pandas as pd
df = pd.read_csv("some_file.csv", na_values=["?"])
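As a quick check of the same idea without a file on disk, here is a sketch (assuming only pandas and the standard library) that feeds an in-memory CSV through read_csv with na_values:

```python
import io
import pandas as pd

# In-memory stand-in for some_file.csv; "?" marks missing values.
csv_text = "x,y\n1,3\n2,?\n?,5\n"
df = pd.read_csv(io.StringIO(csv_text), na_values=["?"])
print(df.isnull().sum())
# x    1
# y    1
```

Each "?" is converted to NaN at parse time, so no replace step is needed afterwards.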
Gradle 8+: to compile Kotlin classes into the same directory as Java:
compileKotlin {
    destinationDirectory = file("build/classes/java/main")
}
Reference: https://github.com/oapi-codegen/oapi-codegen https://github.com/oapi-codegen/oapi-codegen/tree/main/examples/petstore-expanded/gin
Environment: go1.24.2, https://go.dev/dl/
Init go module
mkdir petstore
cd petstore
go mod init petstore
Get tool oapi-codegen
go get -tool github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen@latest
After this step, the tool is added to go.mod.
Prepare the config for oapi-codegen. The full set of config options is documented in this schema: https://github.com/oapi-codegen/oapi-codegen/blob/main/configuration-schema.json
server.cfg.yaml
package: main
generate:
  gin-server: true
output: petstore-server.gen.go
types.cfg.yaml
package: main
generate:
  models: true
output: petstore-types.gen.go
Prepare the openapi file
petstore-expanded.yaml
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  description: A sample API that uses a petstore as an example to demonstrate features in the OpenAPI 3.0 specification
  termsOfService: https://swagger.io/terms/
  contact:
    name: Swagger API Team
    email: [email protected]
    url: https://swagger.io
  license:
    name: Apache 2.0
    url: https://www.apache.org/licenses/LICENSE-2.0.html
servers:
  - url: https://petstore.swagger.io/api
paths:
  /pets:
    get:
      summary: Returns all pets
      description: |
        Returns all pets from the system that the user has access to
        Nam sed condimentum est. Maecenas tempor sagittis sapien, nec rhoncus sem sagittis sit amet. Aenean at gravida augue, ac iaculis sem. Curabitur odio lorem, ornare eget elementum nec, cursus id lectus. Duis mi turpis, pulvinar ac eros ac, tincidunt varius justo. In hac habitasse platea dictumst. Integer at adipiscing ante, a sagittis ligula. Aenean pharetra tempor ante molestie imperdiet. Vivamus id aliquam diam. Cras quis velit non tortor eleifend sagittis. Praesent at enim pharetra urna volutpat venenatis eget eget mauris. In eleifend fermentum facilisis. Praesent enim enim, gravida ac sodales sed, placerat id erat. Suspendisse lacus dolor, consectetur non augue vel, vehicula interdum libero. Morbi euismod sagittis libero sed lacinia.
        Sed tempus felis lobortis leo pulvinar rutrum. Nam mattis velit nisl, eu condimentum ligula luctus nec. Phasellus semper velit eget aliquet faucibus. In a mattis elit. Phasellus vel urna viverra, condimentum lorem id, rhoncus nibh. Ut pellentesque posuere elementum. Sed a varius odio. Morbi rhoncus ligula libero, vel eleifend nunc tristique vitae. Fusce et sem dui. Aenean nec scelerisque tortor. Fusce malesuada accumsan magna vel tempus. Quisque mollis felis eu dolor tristique, sit amet auctor felis gravida. Sed libero lorem, molestie sed nisl in, accumsan tempor nisi. Fusce sollicitudin massa ut lacinia mattis. Sed vel eleifend lorem. Pellentesque vitae felis pretium, pulvinar elit eu, euismod sapien.
      operationId: findPets
      parameters:
        - name: tags
          in: query
          description: tags to filter by
          required: false
          style: form
          schema:
            type: array
            items:
              type: string
        - name: limit
          in: query
          description: maximum number of results to return
          required: false
          schema:
            type: integer
            format: int32
      responses:
        '200':
          description: pet response
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Pet'
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
    post:
      summary: Creates a new pet
      description: Creates a new pet in the store. Duplicates are allowed
      operationId: addPet
      requestBody:
        description: Pet to add to the store
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/NewPet'
      responses:
        '200':
          description: pet response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Pet'
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
  /pets/{id}:
    get:
      summary: Returns a pet by ID
      description: Returns a pet based on a single ID
      operationId: findPetByID
      parameters:
        - name: id
          in: path
          description: ID of pet to fetch
          required: true
          schema:
            type: integer
            format: int64
      responses:
        '200':
          description: pet response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Pet'
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
    delete:
      summary: Deletes a pet by ID
      description: deletes a single pet based on the ID supplied
      operationId: deletePet
      parameters:
        - name: id
          in: path
          description: ID of pet to delete
          required: true
          schema:
            type: integer
            format: int64
      responses:
        '204':
          description: pet deleted
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
components:
  schemas:
    Pet:
      allOf:
        - $ref: '#/components/schemas/NewPet'
        - required:
            - id
          properties:
            id:
              type: integer
              format: int64
              description: Unique id of the pet
    NewPet:
      required:
        - name
      properties:
        name:
          type: string
          description: Name of the pet
        tag:
          type: string
          description: Type of the pet
    Error:
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
          description: Error code
        message:
          type: string
          description: Error message
Generate the source code
generate server code
go tool oapi-codegen -config server.cfg.yaml petstore-expanded.yaml
generate models code
go tool oapi-codegen -config types.cfg.yaml petstore-expanded.yaml
add missing dependencies
go mod tidy
Implement petstore.go based on the generated interface:
package main

import (
    "fmt"
    "net/http"
    "sync"

    "github.com/gin-gonic/gin"
)

type PetStore struct {
    Pets   map[int64]Pet
    NextId int64
    Lock   sync.Mutex
}

func NewPetStore() *PetStore {
    return &PetStore{
        Pets:   make(map[int64]Pet),
        NextId: 1000,
    }
}

// sendPetStoreError wraps sending of an error in the Error format, and
// handling the failure to marshal that.
func sendPetStoreError(c *gin.Context, code int, message string) {
    petErr := Error{
        Code:    int32(code),
        Message: message,
    }
    c.JSON(code, petErr)
}

// FindPets implements GET /pets from the generated ServerInterface
func (p *PetStore) FindPets(c *gin.Context, params FindPetsParams) {
    p.Lock.Lock()
    defer p.Lock.Unlock()
    var result []Pet
    for _, pet := range p.Pets {
        if params.Tags != nil {
            // If we have tags, filter pets by tag
            for _, t := range *params.Tags {
                if pet.Tag != nil && (*pet.Tag == t) {
                    result = append(result, pet)
                }
            }
        } else {
            // Add all pets if we're not filtering
            result = append(result, pet)
        }
        if params.Limit != nil {
            l := int(*params.Limit)
            if len(result) >= l {
                // We're at the limit
                break
            }
        }
    }
    c.JSON(http.StatusOK, result)
}

func (p *PetStore) AddPet(c *gin.Context) {
    // We expect a NewPet object in the request body.
    var newPet NewPet
    err := c.Bind(&newPet)
    if err != nil {
        sendPetStoreError(c, http.StatusBadRequest, "Invalid format for NewPet")
        return
    }
    // We now have a pet, let's add it to our "database".
    // Handlers can run concurrently, so lock the unsafe operations below.
    p.Lock.Lock()
    defer p.Lock.Unlock()
    // We handle Pets, not NewPets, which have an additional ID field
    var pet Pet
    pet.Name = newPet.Name
    pet.Tag = newPet.Tag
    pet.Id = p.NextId
    p.NextId++
    // Insert into map
    p.Pets[pet.Id] = pet
    // Return the created Pet
    c.JSON(http.StatusCreated, pet)
}

func (p *PetStore) FindPetByID(c *gin.Context, petId int64) {
    p.Lock.Lock()
    defer p.Lock.Unlock()
    pet, found := p.Pets[petId]
    if !found {
        sendPetStoreError(c, http.StatusNotFound, fmt.Sprintf("Could not find pet with ID %d", petId))
        return
    }
    c.JSON(http.StatusOK, pet)
}

func (p *PetStore) DeletePet(c *gin.Context, id int64) {
    p.Lock.Lock()
    defer p.Lock.Unlock()
    _, found := p.Pets[id]
    if !found {
        sendPetStoreError(c, http.StatusNotFound, fmt.Sprintf("Could not find pet with ID %d", id))
        return // without this return we would fall through and answer 204 anyway
    }
    delete(p.Pets, id)
    c.Status(http.StatusNoContent)
}
Implement the main.go
package main

import (
    "log"

    "github.com/gin-gonic/gin"
)

func main() {
    petStoreAPI := NewPetStore()
    router := gin.Default()
    RegisterHandlers(router, petStoreAPI)
    log.Println("Starting server on :8080")
    if err := router.Run(":8080"); err != nil {
        log.Fatalf("Failed to start server: %v", err)
    }
}
Start the server
go run .
curl the api
curl -I -X GET localhost:8080/pets
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Thu, 17 Apr 2025 01:30:45 GMT
Content-Length: 4
I have faced the same issue. Is there any solution, or should I use another controller?
You are probably still on PHP 7.4 when WordPress is coded to run on 8+.
I got this error when running the studio app on Windows; it may be related to the flatfile SQLite plugin.
You could encode "?" as a missing value:
import pandas as pd
data = {"x": [1, 2, "?"], 'y': [3, "?", 5]}
df = pd.DataFrame(data)
print(df.isnull().sum())
# x 0
# y 0
df = df.replace("?", pd.NA)
print(df.isnull().sum())
# x 1
# y 1
Possible reasons:
The for loop runs before showingEventList is assigned an array.
Or the method getOldEvents returns a single element in actionArray, so actionArray.map creates a single object instead of an array.
For Windows Users:
PowerShell opens by default in VS Code.
RUN:
python -m venv venv
RUN:
.\venv\Scripts\Activate.ps1
For Linux/Mac Users:
Zsh (a Unix shell) opens by default in VS Code.
RUN:
python3 -m venv venv
RUN:
source venv/bin/activate
In service B, your method signature returns
ResponseEntity<Mono<Void>>
instead of
ResponseEntity<Void>
Especially when in service A you have
.bodyToMono(Void.class)
You can replace the formula after copying (making sure that the new references will be valid locally). This is what I would do in VBA:
Sheet1.Range("A1").Formula2 = "=" & Mid(Sheet1.Range("A1").Formula2, InStr(Sheet1.Range("A1").Formula2, "'!") + 2)
Do you need help converting to C# (I don't have access to VSTO at work, where I am, but can help later)?
You can send keystrokes to another window using a tool called Autohotkey.
OK... still no answer. Fortunately I can now provide it for those who are looking for it. First I'll try to tell you step by step what you need, and at the end I'll answer all the questions I asked.
I created a repository where I solved some problems with embedding on some platforms, and provided a template project to create a Mono-embedded app where you can write cross-platform C++ and C# code and build it for all the mentioned platforms. You can take a look at it here: NativeMonoEmbedded-App.
First you should link against the runtime native library (AKA the runtime, the Mono CLR, the libcoreclr (even though this is Mono)). It's basically the runtime itself, containing the well-known GC, JIT and other parts. You also need the Mono API include headers to communicate with the runtime.
You may find different names for this library:
coreclr.dll on Windows.
libcoreclr.so on Linux.
libmonosgen-2.0.so on Android.
Framework libraries (or runtime libraries) are kind of the Standard Library of .NET, but in this context it's typically called the FCL, BCL or something similar. Basically it contains all declarations and the implementation of the standard library.
It consists of:
System.Private.CoreLib.dll - the internal implementation of the core library, which contains native code unlike the other framework libs.
System.*.dll, Microsoft.*.dll, WindowsBase.dll, mscorlib.dll, netstandard.dll and others - framework libraries containing managed code. They contain the public API surface as well as some implementation details.
When you try to initialize the runtime by one of the examples you can find on the internet, the runtime will first try to find the CoreLib. It's the main part of the framework libs. It definitely contains native code, so it should not be shared between multiple platforms/architectures.
If it's missing, the runtime will print something like this to the console:
* Assertion at D:\a\_work\1\s\src\mono\mono\metadata\assembly.c:2718, condition `corlib' not met
So you need to place it either near the executable (like all other framework libs), or set the assemblies path via mono_set_assemblies_path() or the MONO_PATH env variable to tell the runtime where to look for ANY assemblies (including yours). If you have several paths, separate them with the OS-specific path separator (; on Windows, : on Linux and Android).
Besides System.Private.CoreLib.dll, there are other managed framework libs: DLLs like System.Runtime.dll, System.Console.dll, mscorlib.dll, netstandard.dll, Microsoft.CSharp.dll. Not all of these may actually be required and loaded; it depends on the dependency chain and on what functionality you use.
There are also native libs required by the framework libs. On Windows these may be msquic.dll, System.IO.Compression.Native.dll, Microsoft.DiaSymReader.Native.amd64.dll. On Linux it's libSystem.Native.so, libSystem.IO.Compression.Native.so, libSystem.Globalization.Native.so, libSystem.Net.Security.Native.so and so on. On Android it's almost the same as on Linux, but you may also find JARs like libSystem.Security.Cryptography.Native.Android.jar.
Such native libs must be placed in the folder with all the managed framework libs, or near the executable. BUT on Android, native libs can also be in the libs APK folder.
There are also completely optional components that are available on Linux and Android, but not Windows. One added .NET library I used was not working on Android until I added libmono-component-marshal-ilgen.so, which might perform some IL-code generation. So sometimes you may need one of those components.
Once everything required by the runtime is ready, place your own managed and native libs. You can again place them near the executable, or in a folder added to the assemblies path. Native libs you use through P/Invoke are searched for near the executable, and also near the assembly containing the P/Invoke method declaration.
When you load libraries using NativeLibrary.Load(), LoadLibrary() or dlopen(), though, other rules apply: you need to place them near the executable, or modify the search paths with AddDllDirectory() on Windows, or RPATH on Linux.
I advise you to look at the NuGet packages named Microsoft.NETCore.App.Runtime.Mono.*. They contain the necessary built runtime files for all platforms and architectures; from there you can learn what may be needed.
Download the archives and look inside. In the runtimes/[arch]/lib/net[version]/ folder you'll find all the managed framework libs; they should contain only managed code. Besides the *.dlls, there are also Microsoft.NETCore.App.deps.json and Microsoft.NETCore.App.runtimeconfig.json. You don't need them.
runtimes/[arch]/native/ contains native code. Here you'll find the runtime native lib, the CoreLib, and other important native libs required by the framework libs. There may also be hostfxr and hostpolicy native libraries, but you don't need them (they are for CoreCLR hosting). There's also the include folder with the Mono API headers.
As I understand it, those components are optional.
Yes.
So what is required?
The runtime native lib (libcoreclr.so), the CoreLib, and the other framework libs (managed and native).
I need to place the libcoreclr.so and System.Private.CoreLib.dll near the executable?
Yes, but it depends. Typically if you link against coreclr.dll (or libcoreclr.so) at build time, you place all native libs near the executable. On Windows you could also use AddDllDirectory(), and on Linux you could set RPATH to add shared lib search paths, so that you can place coreclr.dll/libcoreclr.so elsewhere. BUT of course you can also load the shared libs dynamically and then get pointers to the necessary functions, which likewise allows you to place the runtime lib elsewhere.
System.Private.CoreLib.dll can be anywhere. It is always searched for near the executable, but more search paths can be added.
Or there's more files I need to find somewhere in the build artifacts folder?
Yes, just the CoreLib is not enough. As I said, you need the other framework libs like System.Runtime.dll, System.Console.dll and so on.
Also another question: so the modern Mono is just a slimmer version of the CoreCLR, but communication between my native executable and the runtime happens via Mono API? Is this understanding correct?
Yes, Mono is kind of a slimmer version of CoreCLR. There is separate code for the Mono runtime, which essentially works almost like the old Mono (with its JIT and GC), though the framework libs are shared between both runtimes (some runtime-specific overrides may also be present). And yes, your understanding is mostly correct.
Also another question: there's no static library of the CoreCLR, I have to link it dynamically? There's really no way to statically link it?
I still don't know; maybe there is a compilation option for this. Anyway, I decided to use shared libs.
Because of JavaScript engine optimizations, especially in V8:
When both operands are known constants (like two literals), the engine can apply constant folding and dead code elimination, making both == and === extremely fast — sometimes even optimized away entirely.
The loose equality == skips the type check when both operands are already the same type, which can make it marginally faster than === in tight loops involving primitives.
However, the difference is negligible and only shows up in artificial micro-benchmarks.
In real-world usage, === is just as fast or faster, and should always be preferred unless coercion is explicitly needed.
How about using a table to fill the div and using text formatting:
<div style="border:1px solid #ff0000; height:100px; width:100px;" >
<table width="100%" height="100%"><tbody><tr>
<td valign="bottom" align="center">A Text</td>
</tr></tbody></table>
</div>
I ran into a similar issue when working on a JSON file which had too few indents. In my case, switching the Indent
setting from 2 to 4 increased the default indent size.
In your case, switching the Indent
setting from 4 to 2 might work?
Google OAuth2.0 Scopes page: https://developers.google.com/identity/protocols/oauth2/scopes
To access employeeId, the required OAuth2.0 scope for read only purposes is: https://www.googleapis.com/auth/admin.directory.user.readonly
For full read/write, use: https://www.googleapis.com/auth/admin.directory.user
Note that you will need to enable/add the Admin SDK API. The employeeId is part of the externalIds array in the user resource. Retrieved by calling:
GET https://admin.googleapis.com/admin/directory/v1/users/{userKey}
Response:
"externalIds": [
  {
    "type": "custom",
    "customType": "employee",
    "value": "12345"
  }
]
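Once you have the user resource back, extracting the employee ID is a small filter over the externalIds array. A minimal sketch in Python; the sample dict below just mirrors the response shape shown above (token handling and the HTTP call are omitted):

```python
# Sample user resource shaped like the Directory API response above.
user = {
    "externalIds": [
        {"type": "custom", "customType": "employee", "value": "12345"}
    ]
}

# Keep the values whose customType marks them as the employee ID.
employee_ids = [
    ext["value"]
    for ext in user.get("externalIds", [])
    if ext.get("customType") == "employee"
]
print(employee_ids)  # ['12345']
```

The .get() calls keep the extraction safe for users that have no externalIds at all.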
It seems you are trying to use xlwings as a dependency. ~/.local/bin is intended for executables, and after pipx install xlwings the binary is located there. pipx only supports apps being called, not modules being imported.
I would suggest running the script within a virtual environment and ensuring that the module is properly installed via pip.
Related question: What is the difference between pipx and using pip install inside a virtual environment?
Check this: vue-deckgl-suite, it integrates Deck.gl with Vue 3, offering declarative components and support for MapLibre, Google Maps, Mapbox and ArcGIS.
It's because of the inconsistent angular.json and package.json, I assume. I removed "schematicCollections": ["@angular-eslint/schematics"] from angular.json and everything works.
The fiscal year starts on 01-Jul (01-07) while the calendar year starts on 01-Jan (01-01), so the FY starts about 6 months after the CY date (hence the 1-7 shift):
= DATE(YEAR(EDATE(A2, 1-7)), 7, 1)
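The same shift-back-6-months logic can be sketched in Python to check the edge cases (the function name here is mine, not part of the formula):

```python
from datetime import date

def fiscal_year_start(d: date) -> date:
    # Mirrors DATE(YEAR(EDATE(A2, -6)), 7, 1): dates in Jan-Jun belong to
    # the fiscal year that started the previous July.
    year = d.year if d.month >= 7 else d.year - 1
    return date(year, 7, 1)

print(fiscal_year_start(date(2025, 4, 20)))  # 2024-07-01
print(fiscal_year_start(date(2025, 8, 1)))   # 2025-07-01
```

A date in April 2025 shifted back 6 months lands in October 2024, so its fiscal year started on 1 July 2024, matching the formula.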
Did you find a solution to this issue?
Remove the null. New versions only take 2 inputs.
And once I moved all the logic into the custom class, it worked.
Earlier I was trying to check some properties with the default lambda expression and some with the customType; that didn't work.
The other workaround I found is like this.
I am not sure if this is the correct approach:
~~~
from typing import Annotated

from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

llm = ChatOpenAI(temperature=1.0)

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

chat_history = []

def chatbot(state: State):
    return {"messages": llm.invoke(state["messages"])}

def past_message_node(state: State):
    chat_history.append(state["messages"])
    print("Conversation history")
    print("Message History:", chat_history)
    return state

graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("past_message_node", past_message_node)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", "past_message_node")
graph_builder.add_edge("past_message_node", END)
graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "q"]:
        print("Chat Ended")
        break
    for event in graph.stream({"messages": ("user", user_input)}):
        for value in event.values():
            messages = value["messages"]
            if isinstance(messages, list):
                last_message = messages[-1]
            else:
                last_message = messages
            print("Assistant: ", last_message.content)
~~~
docker-compose (with a hyphen) is the Compose V1 command. With Compose V2 there is no need for the hyphen, and it is simply docker compose.
I had the same problem, just delete the Debug.log from C:\Users\{username}\AppData\Local\.vstools\Azurite, then try again. Apparently if the log gets too large the emulator fails to start upon debug.
That's solved with a simple piece of code:
self.customer_widget_instance = CustomerWidget(self.customer_widget)
A colleague has just given me a clue and it turned out to be the correct answer.
When initializing the PrintServer, if you pass in the path of the machine that the printer is on as a parameter and just use GetPrintQueues() without any parameters, then you are talking directly to the networked machine that is the PrintServer, which gets the updated status correctly.
If you initialize the PrintServer without a parameter, and use the flags as I have posted, it is talking to your local machine which is not getting the updated status until it is re-polled (in this case presumably by the "Printers & Scanners" dialog)
I guess the issue is with SSR; you need to inject the styles as soon as possible. Here is the official manual from Next; depending on your router there are 2 different solutions. If you want, post details of which router version you have and I will help you with code examples.
Installation worked out of the box: sudo apt install gnuplot-qt
. I decided to use the qt version, since I am also very happy with the Python/Matplotlib backend QtAgg (see https://matplotlib.org/stable/users/explain/figure/backends.html)
Calling gnuplot from LaTeX also worked out of the box. In TexStudio / Options / Configure TexStudio... I use pdflatex -synctex=1 -shell-escape -interaction=nonstopmode %.tex.
\begin{tikzpicture}
\begin{axis}[
xmin=-pi, xmax=pi,
ymin=-2, ymax=2,
title=Sine Wave,
legend pos=south east, legend style={draw=none},
]
% gnuplot script inside brackets {}
\addplot gnuplot[raw gnuplot, id=sin,mark=none,color=violet] {plot sin(x)};
\addlegendentry{sin(x)}
\end{axis}
\end{tikzpicture}
Thank you for your help.
Best regards, philipp
Possibly, since flutter_dotenv 5.0+, it tries to read .env from the AssetBundle. This means the file has to be declared as an asset in pubspec.yaml:
flutter:
  assets:
    - .env
But then the file will be included in the build! And I am actually trying to prevent exposing my .env.
Something like this might be useful.
(%i) table_form(makelist([i,i*10],i,1,5));
The output will be:
1 10
2 20
3 30
4 40
5 50
Hope it helps!
I switched back to the MariaDB connector, and now it works.
If you're running on Apple silicon, include the following argument when building your container:
--platform=linux/amd64
ex.
$ docker build --platform=linux/amd64 --no-cache -t myproject/mycontainer .
More details here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html
I have a similar issue. As a commenter above suggested, I reran it without dplyr, but that did not fix the issue.
If you managed to figure this out, let me know.
To help others trying to debug this: I found that when I remove the "by =" argument, it runs just fine. So there is likely something wrong either with the way the by argument is being read, or with the variable being passed in. It's unclear what, since when I examine the variable it appears to be a generic character class. I even tried forcing it to be read as a character with as.character() in the argument, but that failed too (with a different error claiming the variable wasn't in the data).
@SudeepKS I had the same issue. It works for me now. The solution is to remove domLayout="autoHeight" from the ag-grid-angular tag and add some styling: style="height: 200px;". You should be able to see the scrollbar now.
Unfortunately, there is no direct way. Why?
Because Android only writes to the call log after the call is finished, and there is no system broadcast saying "a new entry was added to the call log."
But you can work around it this way:
Listen to the call state using a plugin like phone_state, and then trigger an update after the call has ended.
Simple steps:
Listen for changes in phone state (ringing, connected, ended).
Wait 1-2 seconds on disconnected or ended (to give the system time to write to the call log).
Then fetch the call log again.
Enjoy!
Another thing to check is whether your tests modify the current directory (for example with `os.chdir`).
When running tests from the command line directly, tests that change directory work fine. When running them from VS Code's UI, it apparently breaks things.
Great, thanks! It's been helpful
from pathlib import Path
import shutil
# Move the file to a public path for download (simulated path for demonstration)
source_path = Path("/mnt/data/A_digital_photograph_captures_a_young_woman_with_l.png")
public_path = Path("/mnt/data/edited_selcan_gucci.png")
shutil.copy(source_path, public_path)
# Provide the path for user download
public_path.as_posix()
It has been a while, but this feature seems to have been released with version 127. Have a look for "Automatic Fullscreen Content Setting". More info at https://chromestatus.com/feature/6218822004768768
If you have access to the browser, you can find the feature at chrome://flags/#automatic-fullscreen-content-setting
I am facing a similar issue; could you please let me know what the solution is?
When I add my laptop's IP to the allow list of the web app's access restrictions, only then am I able to access the web app through the app gateway's public frontend IP. But anyone should be able to access the web app through the app gateway without being added to the access restrictions list.
I am having no issues with 2.19.0, but the bug reports all seem to discuss Set-MgBetaUserLicense.
Has anyone been able to downgrade to at least 2.25 to fix this?
Also, I am using Windows PowerShell.
CommandType Name Version Source
----------- ---- ------- ------
Function Set-MgUserLicense 2.19.0 Microsoft.Graph.Users.Actions
Run the web app in chrome and record each unique domain request in the developer tools' network tab.
This looks to have been an issue with the version of vaadin I was using, 24.6.1 . After updating to 24.7.2, it reflects over the dependency class path just fine.
Apparently, the poorly formed CSV file precludes me using Row Sampling to peel off the first row only.
You are importing createStackNavigator from "@react-navigation/stack". Use instead:
import { createNativeStackNavigator } from '@react-navigation/native-stack'
If that does not fix your issue, try installing react-native-gesture-handler.
The 401 error isn't necessarily from CORS, but rather from your auth endpoint. Either you need an unauthenticated endpoint to register, then redirect/respond with the token/auth mechanism of your choice, or you need a general client "secret" token with the permissions necessary to access the "public" endpoints like register, login, etc.
Go to your prisma folder and check your schema.prisma file. The generator client block should give you an idea where to look for the client:
generator client {
  provider = "prisma-client-js"
  output   = "../src/generated/prisma"
}
I was importing the client from '@prisma/client', but as you can see from the output above, the import should be from "../generated/prisma" or '@/generated/prisma/client', depending on your coding style.
That "register_sidebar" array in my previous answer had been working fine ever since January 30, 2023. Then suddenly on April 2, 2025 all my WordPress admin & frontend pages were broken, saying "Undefined variable $i" - so I removed all the "$i"s from the 'name' & 'id' lines and that seems to have fixed the problem.
Clearly I don't know what I'm doing, but I'm curious: what was the $i for, and why did it work before but not now?
Try llava 3.2 or MiniCPM-V; these are popular VQA models.
The issue was that I had "DNS hostname" setting for the VPC as disabled. Both "DNS resolution" and "DNS hostname" needs to be enabled as mentioned here: https://docs.aws.amazon.com/vpc/latest/userguide/AmazonDNS-concepts.html#vpc-dns-support
If you use custom DNS domain names defined in a private hosted zone in Amazon Route 53, or use private DNS with interface VPC endpoints (AWS PrivateLink), you must set both the
enableDnsHostnames
andenableDnsSupport
attributes totrue
.
I was having the same issue; it seems like it's not working on recent Flutter/Android versions, not sure why, and many other APIs have problems. I just found this plugin, and at least the example is working for me: I got the list of cast devices and the cast works. I still need to implement it in my app, but it looks promising.
Looks like the bug has been fixed. If you use the hashicorp/setup-terraform GitHub Action, you can now reference the results of terraform commands through the outputs of the different steps.
Here is an example:
- name: Setup Terraform
uses: hashicorp/setup-terraform@v3
- name: Terraform Init
run: |
terraform init
- name: Terraform Plan
id: plan
run: |
terraform plan -detailed-exitcode
- name: Reference Past Step
run: |
echo ${{ steps.plan.outputs.exitcode }}
Fatal error: Uncaught Exception: FPDF error: Can't open image file: ./imgs/user.png in /home/fcvsgpmy/public_html/myScreening/fpdf.php:271 Stack trace: #0 /home/fcvsgpmy/public_html/myScreening/fpdf.php(1259): FPDF->Error('Can't open imag...') #1 /home/fcvsgpmy/public_html/myScreening/fpdf.php(885): FPDF->_parsepng('./imgs/user.png') #2 /home/fcvsgpmy/public_html/myScreening/generate_admission_letters_discrete_2.php(488): FPDF->Image('./imgs/user.png', 170.5, 77.0501, 25, 32) #3 {main} thrown in /home/fcvsgpmy/public_html/myScreening/fpdf.php on line 271
Same problem with the new ODBC driver 0.9.4.1186 from April 2025.
I cross-posted this question on the Microsoft Q&A forum:
It's necessary to use the following attribute:
[TextViewRole(PredefinedTextViewRoles.Interactive)] // <----- This attribute is required
Yes, this can be achieved using scheduled WebJobs on Linux. Currently, this feature is in Public Preview; based on the feedback being received, it is planned to be announced as a GA feature soon. WebJobs on Linux or Windows containers execute inside the main app container.
Not sure if this is still a problem, but one thing I did notice was that the docs use the "function" keyword to build their components. It could be that the directive needs to be above the export statement. So try doing:
export default function Test(){ return null }
Try llava 3.2 or MiniCPM-v (they can do VQA). They are all on HF and ollama; with ollama they are easier to run. If you need something fast, moondream2, but the model itself isn't strong. You can also search HF for your specific domain; someone may have fine-tuned a model for it. And, as usual, if open source doesn't work out, you'll have to fine-tune on something yourself.
Hello, CJ mentioned, as shown in the pic below, that I have to refresh the currency from my store. I followed that, but it is still the same issue. However, do you think this link will help me solve it?
Currently recommended actions for you:
1. Try turning down the Cloudflare security level.
2. Add CJ's IPs to the website whitelist:
47.254.74.208
47.88.4.204
47.254.34.239
3. Contact your host and DNS provider.
Materials:
https://help.redsweater.com/marsedit/humans_21909/
humans_21909=1 error in codeigniter project
It is possible in 2025: an EventBridge event bus can send to Lambda functions in other AWS accounts.
https://aws.amazon.com/blogs/compute/introducing-cross-account-targets-for-amazon-eventbridge-event-buses/
As someone said above, origin != site. The cookie is being sent as same-site in your case, since both frontend and backend share the same site, localhost (the port does not matter).
If they were different domains, like backend.com and frontend.com, it would have been considered cross-site as well as cross-origin.
Simply running pod install in the /ios folder of the Flutter project solved my issue.
Is there a way we could run multiple functions in parallel, without the need to switch context using asyncio.sleep()?
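One common pattern for this is asyncio.gather, which schedules several coroutines concurrently on the event loop without any hand-rolled sleeping. A minimal sketch (the fetch_a/fetch_b coroutine names are made up for illustration):

```python
import asyncio

# Hypothetical coroutines standing in for real awaitable I/O work.
async def fetch_a():
    await asyncio.sleep(0.1)  # placeholder for a real await point
    return "a"

async def fetch_b():
    await asyncio.sleep(0.1)
    return "b"

async def main():
    # Both coroutines run concurrently; total wall time is ~0.1 s, not ~0.2 s.
    return await asyncio.gather(fetch_a(), fetch_b())

print(asyncio.run(main()))  # -> ['a', 'b']
```

Note this gives concurrency for I/O-bound coroutines only; truly parallel CPU-bound functions would need concurrent.futures or multiprocessing instead.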
Or, after writing plt.savefig('N.png'), you can continue on the next line and write plt.show(). The error will no longer show.
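In other words, save before showing. A minimal sketch of that ordering (using the non-interactive Agg backend so it also runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; harmless for this sketch
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])
plt.savefig("N.png")  # save first, while the figure still holds the plot
plt.show()            # then display; showing first can leave a blank file,
                      # because the figure may be cleared after show()
```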
You can also use torch.tolist()
>>> a = torch.zeros([2,4])
>>> a
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.]])
>>> a = a.tolist()
>>> a
[[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]
And index from there
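Since tolist() yields plain nested Python lists, the indexing afterwards is ordinary list subscription; a small sketch:

```python
import torch

a = torch.zeros([2, 4])
lst = a.tolist()              # nested Python lists of floats, no tensors left
print(lst[1][3])              # plain Python indexing -> 0.0
print(len(lst), len(lst[0]))  # shape survives as list lengths: 2 4
```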
I had the same issue; I tried @variables and scripting successfully, but in the end the fastest way for me was:
1. Add an "(int) value" column theintcolumn to the table yourtable.
2. If needed, fill theintcolumn with zero values: "update yourtable set theintcolumn=0;"
3. Drop the primary key attribute of your previous primary key (thepreviouskey).
4. Re-declare theintcolumn, now as primary key with auto-increment; it will fill automatically.
5. From this point you can do whichever you prefer:
5A. Drop the thepreviouskey column and rename theintcolumn to thepreviouskey; then optimize yourtable. Fastest.
5B. Set thepreviouskey = theintcolumn; then drop theintcolumn; but after that don't forget to re-assign the primary key and auto-increment attributes to the thepreviouskey column!
Sorry, off topic rant....
Databutton is shit. Not because of their AI or that the AI will chew through your credits if you're not very specific in your language when entering a prompt to solve a problem, but because of the business practices of the owners.
Each time the AI writes code, you spend a credit. Credits are available via subscription. However, I bought the $200 monthly subscription because I was being careless and spending credits quickly through bad prompts. The $200 provided me with 1,000 credits. I used about 250 credits during my first month. When my subscription renewed - AND I WAS CHARGED $200 AGAIN - I was only "refilled" to 1,000 credits for my $200. My first 1,000 credits cost $0.20 each. My second month's credits cost $0.80 each. Four times the cost per credit!!
Terrible business practices like this haven't been seen (at least in my experience) since the cell phone companies of the late 1990s and early 2000s. Cell phone companies would allow you to have X minutes per month on a "use it or lose it" basis. The difference is that the cell phone companies told you up front that the minutes expired. DATABUTTON ONLY DISCLOSES THAT THEIR CREDITS DO NOT ROLL OVER WHEN YOU COMPLAIN. IT IS NOT STATED ANYWHERE PLAINLY ON THE SUBSCRIPTION SIGNUP PAGE!
If you want to get AI to help you code, find another website. Do not give Databutton any of your money. IMHO, they're scammers who deserve to go out of business.
And yes, I complained to them. They only saw fit to reinstate 300 of my lost 720-750 credits.
This is GENIUS and EXACTLY what I needed! I put formulas in for columns ABDE to make it balanced and white backgrounds on ABDE... then i can use columns B&D for labelling each side of the bar and custom values in data labels for BCD means i get THREE sets of custom labels, correctly balanced and spaced... THIS IS AMAZING and something that's been mulling around in the back of my mind for years on how to do this... just stumbled on this solution... AMAZING!!!!
As others have mentioned, this rule is overzealous and should be disabled.
Per MDN:
https://developer.mozilla.org/en-US/docs/Web/API/Element/click_event
"The event is a device-independent event — meaning it can be activated by touch, keyboard, mouse, and any other mechanism provided by assistive technology."
Hi @user861594, is there any version of QAF-Cucumber that would support Cucumber 7?
IMHO it's good to do both. Just don't rely on them as your only defense. The blacklist will help filter out some problems. The whitelist will improve quality of what remains. After the 2 filters have done their job you have a slightly less murky mess left to deal with. Run scans on what is left to clean it up a bit. What is left will still not necessarily be trustworthy but it won't be raw sewage anymore if you had good whitelist/blacklist/anti-malware. Copy what's left to media and transfer it to an airgapped machine to work with so you don't contaminate the original machine when you open it up.
You should describe your field more precisely. It's hard to suggest something when you need to predict an "array with variable length". For example, time-series prediction uses its own specific techniques. So please give some info about your data.
Also, for regression you should keep the data normalized. If you use -100 in the output while other values are very small, like 0.1, it may lead to an exploding gradient problem: big values will dominate the loss function. So, if your goal is predicting a vector, try to use zero-padding.
There was actually a bug in spring-data-jpa. It was fixed recently, in version 3.4.3:
https://github.com/spring-projects/spring-data-jpa/issues/3762
For iOS developers, utilizing a VPN is essential to ensure secure connections, protect sensitive data, and maintain privacy during development and testing phases. VPNs help in accessing geo-restricted content, safeguarding against potential threats on public Wi-Fi, and simulating different network environments.
When developing VPN applications for iOS, it's crucial to understand the underlying technologies and best practices. Apple's Network Extension framework provides the necessary tools to create custom VPN solutions, allowing developers to manage VPN configurations and connectivity on iOS devices.
Choosing the right VPN protocol is also vital. Protocols like IKEv2/IPsec offer strong encryption and are well-supported on iOS, ensuring both security and performance.
You need a custom AL extension, and in it you create a TableExtension and List/Card page extensions.
It's not that deep, but you need to know some programming basics.
Here are some helpful links:
Get started with AL
Build your first sample extension with extension objects, install code, and upgrade code - Microsoft
To move an App Service from one App Service Plan to another, both plans must be in the same resource group and region. Since your App Service 3 is in Resource Group 2 but uses App Service Plan 1, you cannot directly move it to App Service Plan 2 if they are in different resource groups. The "Change App Service plan" feature only lists plans within the same resource group.
For more details, refer to the official documentation.
For Windows Users:
PowerShell opens by default in VS Code.
RUN:
python -m venv venv
RUN:
.\venv\Scripts\Activate.ps1
For Linux/Mac Users:
Bash or Zsh (Unix shells) opens by default in VS Code.
RUN:
python3 -m venv venv
RUN:
source venv/bin/activate
The attributes listed in the input properties are those displayed by default in the user profile. Make sure that the custom attribute you created is displayed by default in the user profile.
Look at the Discriminator loss; it is constant. This means it may be overfitted or predicting just one class (for example, all fake). That leads to problems with the generator loss, etc.
There are plenty of problems when training GANs. Look at the Discriminator predictions first. Also, this result is very data-dependent. Let me know about the discriminator predictions :)
And sometimes you just need to restart the training process and everything will be fine. So also try restarting training with different optimizer learning rates. The LR is very important here!
I've seen similar issues where long-running Node processes slow down after many calls. Can you check the following:
HTTP Agent Settings: The AWS SDK uses Node’s built-in HTTP/HTTPS agent, which by default enables keep-alive. Over time, stale connections might build up. One workaround is to turn off keep-alive:
const https = require('https');
const agent = new https.Agent({ keepAlive: false });
Resource Leaks: A long-lived process might suffer from minor memory or socket leaks that add up. I’d suggest profiling your application to see if memory usage or the number of open sockets increases over time.
SDK Version: Make sure you’re using the latest version of the AWS SDK. Older versions have had issues with connection handling that might be causing this slowdown.
SWF Workflow History: If your workflows accumulate a large number of events, it can slow down processing. Could you check if limiting the history size improves the response time?
Thanks for this solution, it works very well. Unfortunately, I have a small issue here. In my matrix, in column A, in some cases I have the same value (e.g. "ROWHEADER1") but with different information in the value range B1:F5. Do you have a solution for this scenario as well?
Thanks
I wish you a nice day,
Alina
If we are able to connect ODI 12c to Snowflake, and we now want to pull data from an Oracle database, SQL Server DB, flat files, and MS Access and push this data to Snowflake tables, is any Snowflake-specific ODI knowledge available?
What volume of data can be pushed from the source to a Snowflake target table?
Are there any performance issues while pushing data to Snowflake?
Any limitations of ODI 12c with Snowflake, assuming JDBC driver connectivity?
Regards,
Mangesh
Use zipWithIndex for precise batching:
rdd = df.rdd.zipWithIndex()
batched_rdds = rdd.map(lambda x: (x[1] // batch_size, x[0])).groupByKey().map(lambda x: x[1])
batched_dfs = [spark.createDataFrame(batch, schema=df.schema) for batch in batched_rdds.collect()]
I'm doing the same. Did you succeed?
If I read the documentation, it seems to imply that it only updates the timestamp, not the time zone. However, I tried to do it in 2 steps and it still didn't work, so I'm just adding this for completeness.
I tried a first step where the timestamp shifts (as done in previous answers) and then a second step where I kept the source and destination time zones the same, thinking the formatting would take the time zone of the source. But no, the formatting still gives +00:00.
In a second test, in the second step I gave a UTC time zone as the input date and it converted the timestamp to RST, but again the format in the output is +00:00.
Result: the base time, if specified in a different time zone from the source time zone, already gets converted, and then you have a second conversion to the destination time zone. But none of this changes the time zone, which is kept as UTC.
It is very hard to interpret loss in a huge variety of situations, and GANs are one of these cases. You can hardly just look at the G and D losses and say: yeah, this model is great.
But you still need to evaluate the model, so I have a very simple solution: just generate a batch of images and plot them every N epochs, and save the model weights. If the quality of the images is good, stop training and use the model weights from the last checkpoint.
Another approach uses the Early Stopping callback idea: if there is no improvement for N epochs, stop.
Also, from the experience of many researchers, some common bounds for DCGAN have been estimated: G_loss around 2 and D_loss around 0.1.
By the way, the training process for GANs is very unstable; there are some techniques to stabilize model training.
So, I highly recommend the visual-evaluation approach :)
Further to @mathias-r-jessen comment (which looks to be the problem), you can ensure that the database query isn't the issue by replacing your database query. Try changing:
string query = "Select * from number where number > 100";
To
string query = "SELECT 110 AS number UNION SELECT 130 AS number";
This will return exactly two rows - so if you see two rows with this your issue will be the db query. As already suggested this is something that a debugger would really help to understand though.
I am having the same issue.
Did you manage to solve it?
With bash and Airflow CLI
airflow dags list | awk -F '|' '{print $1}' | while read line ; do airflow dags pause $line ; done
Hello everyone and thanks for the support.
I solved the problem: I was supposed to upload at least 5 documents, which is why the upload was stopped.
I had mistakenly uploaded only 1 document.
Thanks,
GF
To avoid this headache (and many similar ones) I highly recommend using the OpenRewrite Spring Boot Migrate recipe.
I know it's been three years since Spring Boot 3.0.0 was released, but I'm only just now dealing with the upgrade from 2.7. I was able to use OpenRewrite to upgrade from 2.7.18 to the latest 3.3.x version, and it completely automated away the javax -> jakarta migration, among many other tedious tasks.
Differences:
Triggers: Azure Functions offer additional out-of-the-box triggers, making them more versatile for event-driven scenarios.
Scalability:
Consumption/Premium plans: Functions scale automatically based on demand, without additional configuration.
App Service plan: Hosting Functions in an App Service plan requires manual scaling configuration and can limit scalability if scaling is not configured.
For more details, refer to the official documentation