I'm encountering the same problem. Any luck solving this?
Have a look here: https://support.google.com/webmasters/thread/322653126?authuser=1&hl=en&msgid=324095844
We're having identical issues, no closer to solving them, but found some similarities. Where are you hosted?
Just add one more parameter called 'animate':
let newFrame = NSRect(origin: newOrigin, size: newSize)
window.setFrame(newFrame, display: true, animate: true)
"cloud url (litedev.eu.hikconnect.com)"\n"Device Serial Number"\n"Device verificationcode"
This is format that Hikvision uses for their QR code generation.
As those are also needed when using OpenApi and SDK from hik-partner pro or hik-connect
Did you find a solution for this? Any update? I have the same problem: I updated v18 to v19 and only the app.component is SSR, plus some pages that contain only HTML. For me it is important to load the full route for SEO.
Since 2020 there is also the Simple Queue Plugin, which adds "move higher / lower / to top / to bottom" actions to the queue entries.
It also enables CLI access to queue handling to some extent.
It turns out that all of my containers were fine and not overlapping - it was due to the fact container 1 had very large font which overflowed the border and gave the appearance of overlapping the other elements. I noticed this after putting a red border around the affected elements and seeing that all the containers were stacked nicely as in the snippet. So, I defined the width and height of container 1 and made it large enough to fully contain the large text, and that pushed containers 2, 3, and 4 downward, as desired.
If you can't do it in sciplot, you could try morphologica, which has a QuiverVisual class for plotting vector fields. Its 2D GraphVisual class is also able to do this. A short example program that draws three quivers for a three-element vector field is:
#include <vector>
#include <morph/vec.h>
#include <morph/Visual.h>
#include <morph/ColourMap.h>
#include <morph/QuiverVisual.h>
int main()
{
    morph::Visual scene(1024, 768, "morph::QuiverVisual"); // Create 1024x768 pix window

    // Define a vector field
    std::vector<morph::vec<float, 3>> coords = { {0,0,0}, {0,1,0}, {1,0,0} };
    std::vector<morph::vec<float, 3>> quivs = { {0,0,1}, {0,-.3,1.1}, {-.3,0,1.2} };

    // Create the QuiverVisual VisualModel with make_unique
    auto vmp = std::make_unique<morph::QuiverVisual<float>>(&coords, morph::vec<float, 3>{0}, &quivs,
                                                            morph::ColourMapType::Viridis);
    scene.bindmodel (vmp);                 // boilerplate - wires up callbacks
    vmp->do_quiver_length_scaling = false; // Avoid scaling the quiver lengths
    vmp->finalize();                       // builds the OpenGL vertices
    scene.addVisualModel (vmp);            // Adds the QuiverVisual to the scene
    scene.keepOpen();                      // Render until user quits with Ctrl-q
    return 0;
}
The result looks like this: (screenshot of the morph::Visual window displaying a QuiverVisual and the scene coordinate arrows)
For more options/example code see:
https://github.com/ABRG-Models/morphologica/blob/main/examples/showcase.cpp#L335
and
https://github.com/ABRG-Models/morphologica/blob/main/examples/quiver.cpp
Use this first:
npm install [email protected] [email protected] --legacy-peer-deps
and then this:
npm install --legacy-peer-deps
If that didn't help, try this:
rm -rf node_modules package-lock.json
npm cache clean --force
npm install --legacy-peer-deps
I think the best way is this one:
Parent reducers are a bad idea: it is not clear which reducers can change our model. Making just a "changeItem" slice is not convenient or clear, either.
Is there a case where using type witness is absolutely needed?
Prior to Java 8, yes. Consider this method:
void processStringList(List<String> stringList) {
    // process stringList
}
If you call this method with Collections.emptyList(), the compiler could not infer the type:
processStringList(Collections.emptyList());
The error would be: List<Object> cannot be converted to List<String>.
So you had to write it this way:
processStringList(Collections.<String>emptyList());
In JDK 8 and later, the compiler can infer from the method definition that the type is String, so this will work:
processStringList(Collections.emptyList());
For more information, check the Type Inference documentation.
I think I found it:
https://pingouin-stats.org/build/html/generated/pingouin.pairwise_tukey.html
This allows you to specify the effect size; one option is eta-square.
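For example, a call could look like this (a minimal sketch: the toy data frame and the column names are made up, and the effect size is requested through the effsize argument):

import pandas as pd
import pingouin as pg

# Toy data: one numeric outcome ("score") and one grouping factor ("group").
df = pd.DataFrame({
    "score": [4, 5, 6, 7, 9, 10, 12, 13, 14],
    "group": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
})

# Tukey-HSD pairwise comparisons; effsize selects the reported effect size.
posthoc = pg.pairwise_tukey(data=df, dv="score", between="group", effsize="eta-square")
print(posthoc)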
Here is a description of a utility that can help with sharing the data:
A parent reducer makes the logic unclear: it's more difficult to track all of the reducers which change our model. Another approach is here:
Instead of making openAppWhenRun dynamic, store a flag in UserDefaults. I think this way you will prevent the shortcut from triggering.
What do you think about this approach?
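A rough Swift sketch of that idea (the key name "shouldOpenAppOnRun" and the intent type are illustrative, not taken from the original question):

import AppIntents
import Foundation

struct MyShortcutIntent: AppIntent {
    static var title: LocalizedStringResource = "My Shortcut"
    // Keep this static value fixed instead of trying to change it at runtime.
    static var openAppWhenRun: Bool = false

    func perform() async throws -> some IntentResult {
        // Read the flag the app stored earlier and branch on it inside perform().
        if UserDefaults.standard.bool(forKey: "shouldOpenAppOnRun") {
            // ... variant of the work that needs the app ...
        } else {
            // ... background variant ...
        }
        return .result()
    }
}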
Instead of thunks, you can also use this approach:
Had the same problem, caused by someone on the repo using force when pushing to a branch. If no local changes have been made, use git fetch origin and git reset --hard origin/<branch-name>.
About sharing state between reducers - you can use redux-patch-action-middleware; here is the idea:
Load the permissions when the application starts (in App.js probably) and store it in either context or use any state management library like Redux or Recoil.
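A rough sketch of the context-based variant in React (the /api/permissions endpoint and the shape of the response are made up for illustration):

import React, { createContext, useContext, useEffect, useState } from "react";

const PermissionsContext = createContext([]);

export function PermissionsProvider({ children }) {
  const [permissions, setPermissions] = useState([]);

  useEffect(() => {
    // Load the permissions once when the app starts.
    fetch("/api/permissions")
      .then((res) => res.json())
      .then((data) => setPermissions(data))
      .catch(() => setPermissions([]));
  }, []);

  return (
    <PermissionsContext.Provider value={permissions}>
      {children}
    </PermissionsContext.Provider>
  );
}

// Any component can then read the permissions without refetching them.
export function usePermissions() {
  return useContext(PermissionsContext);
}

Wrap <PermissionsProvider> around your routes in App.js and call usePermissions() wherever you need to check a permission.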
From my screenshot above, I can say that the library has fixed the issue, so you can try the latest version. But it seems the problem is with the Flutter SDK version that you are using. Can you tell me what your Flutter and Dart versions are?
What does your doPCSearch method look like? Normally what I've seen is in the route definition, the .process() call instantiates a Processor (a class implementing the Processor interface), and there you could define your ObjectMapper as a class variable and use it in the process() method.
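For illustration, a minimal Camel Processor along those lines (PcSearchProcessor is a made-up name, and the Map target type is just a stand-in for whatever result class your doPCSearch actually produces):

import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class PcSearchProcessor implements Processor {
    // One ObjectMapper per processor instance, reused for every exchange.
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void process(Exchange exchange) throws Exception {
        String json = exchange.getIn().getBody(String.class);
        // Parse the incoming JSON body; swap Map for your own result class.
        Map<?, ?> parsed = mapper.readValue(json, Map.class);
        exchange.getIn().setBody(parsed);
    }
}

In the route definition you would then call .process(new PcSearchProcessor()).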
If you are using the Tailwind CSS library, you can set the alpha value to 50% this way:
.my-class {
    background-color: theme('path.to.color' / 50%);
}
docs: https://v3.tailwindcss.com/docs/functions-and-directives#theme
You have to make it session-based instead of user-based. Then you will get the percentage of the total (100%).
Use this version:
"react-native-gesture-handler": "2.20.2"
and your issue will be gone.
This is the official fix.
To prevent the keyboard from hiding in Flutter when clicking submit, you can use a FocusNode. Keep the focus on the text field by calling FocusScope.of(context).requestFocus(focusNode) inside the submit function. This prevents the keyboard from dismissing when the submit button is pressed.
If you want to see the data or images in draft mode in your Expo project even if it's not published, just go to the project settings, then to the Content API. There, you can find the content delivery option—just turn it on to "Draft."
When you Hire WordPress Developers for plugin development, the level of access they require depends on the complexity of the project. Below is a detailed breakdown of what access a WordPress plugin developer might need:
The developer needs administrator access to install, activate, and configure plugins. This access allows them to test how the plugin integrates with your site. They can also troubleshoot compatibility issues with themes or other plugins.
If the developer needs to upload, modify, or delete plugin files directly, they require FTP/SFTP credentials. This is important for debugging issues that cannot be fixed via the WordPress dashboard. Secure access should be granted with a separate user account to prevent security risks.
Some plugins require custom database tables to store data efficiently. The developer may need access to phpMyAdmin or direct SQL access to create, update, or optimize database structures. If database access is needed, ensure the developer has limited privileges to avoid accidental data loss.
Developers often need access to the built-in WordPress theme and plugin editor to modify existing code. However, this should be granted only if absolutely necessary, as direct code changes can impact site stability.
If the plugin interacts with third-party services (such as payment gateways or APIs), the developer may need API keys and integration details. Proper documentation should be provided to ensure secure API implementation.
Instead of providing direct access to a live website, a staging environment is highly recommended.
Developers can test the plugin in a controlled setting without affecting the live website.
Once testing is complete, the plugin can be deployed to the main site.
Security Measures When Granting Access
By carefully managing access, you can ensure a secure and efficient plugin development process when you Hire WordPress Developers for your project.
Your app does not come back to the foreground because, I think, your data field is not set up properly. Try using this scheme instead:
Did you also try, in order to test and see which part of the data is failing, to use only the host and the scheme definition?
The link is not being caught by your app, otherwise it would come to the foreground.
Let me know.
const http = require('http');
const hostname = 'TxxTY.aternos.me';
const port = 59951;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
Alternatively, you can get this error because of a wrong version number and not a wrong ID. As per this answer https://stackoverflow.com/a/73917597/24058693:
Use the version number, and only the number, e.g. "1" -- do not use "v1.0" or "Version 1 on Oct 1, 6:10 AM" or your deployment's description.
ID and version number have to match, otherwise you'll get an error on both.
The problem is that the URL you have ('/url/to/OtherComponent.js') is dynamic and could be fetched from an API. React.lazy can't directly handle dynamic URLs in this manner because it needs to resolve the module ahead of time.
Here's how you can handle this:
Ensure that the URL is known before passing to React.lazy: You can wrap the logic in a state or use a hook to fetch the URL using Fetch API, and only call React.lazy after the URL is known.
Use dynamic imports once the URL is available
React.lazy only accepts a dynamic import function, which works with static module paths; a rough sketch of the fetch-then-lazy idea is shown below.
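A sketch under these assumptions: the module URL comes from some async source (fetchComponentUrl() below is a made-up placeholder for your API call), the remote file is an ES module whose default export is a React component, and your bundler is told to leave the dynamic import alone (the webpackIgnore comment is one way to do that with webpack):

import React, { Suspense, useEffect, useState } from "react";

// Placeholder for however you obtain the URL (API call, config, ...).
async function fetchComponentUrl() {
  const res = await fetch("/api/component-url");
  const { url } = await res.json();
  return url;
}

export default function RemoteComponent() {
  const [LazyComp, setLazyComp] = useState(null);

  useEffect(() => {
    let cancelled = false;
    fetchComponentUrl().then((url) => {
      if (cancelled) return;
      // Only create the lazy component once the URL is known.
      setLazyComp(() => React.lazy(() => import(/* webpackIgnore: true */ url)));
    });
    return () => { cancelled = true; };
  }, []);

  if (!LazyComp) return <p>Loading…</p>;
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <LazyComp />
    </Suspense>
  );
}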
import ldb

# "dbl" is assumed to be an already-opened ldb.Ldb database connection.
for k in dbl.search("CN=admin,CN=Users,DC=nulex,DC=test"):
    for i in k.items():
        print(i)

    modmsg = ldb.Message()
    modmsg.dn = k.dn
    # delete the value if it exists
    modmsg.add(ldb.MessageElement(elements=['@@@@@111111'], flags=ldb.FLAG_MOD_DELETE, name='@IDXGUID'))
    # replace the value
    modmsg.add(ldb.MessageElement(elements=['@@@@@222222'], flags=ldb.FLAG_MOD_REPLACE, name='@IDX_DN_GUID'))
    # add a new value or append values
    modmsg.add(ldb.MessageElement(elements=['@@@@@444444'], flags=ldb.FLAG_MOD_ADD, name='@IDX_DN_GUID'))
    dbl.modify(modmsg)
Use this code in the table class definition:
@ColumnInfo(name = "ColumnName", defaultValue = "0")
int ColumnName;
You have to first register your provider: "Microsoft.Web" for the relevant subscription. Go to the Resource Provider under Subscription Settings and type "Microsoft.Web" and click on "register".
Some general, more involved steps:
Reference Link: Resource providers and types
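If you prefer the command line, the same registration can be done with the Azure CLI (assuming it is installed and you are logged into the right subscription):

az provider register --namespace Microsoft.Web
az provider show --namespace Microsoft.Web --query registrationState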
I managed to get a single file working using additional directives in the csproj file:
<PropertyGroup>
  <IncludeNativeLibrariesForSelfExtract>true</IncludeNativeLibrariesForSelfExtract>
  <IncludeAllContentForSelfExtract>true</IncludeAllContentForSelfExtract>
</PropertyGroup>
There's no actual duplication in your commands; what you have to do is encapsulate that DB query logic in a repository method and call that method in both command handlers, for example:
var user = UserRepository.GetUserByEmail(command.EmailAddress);
// ...user logic...
UserRepository.Update(user);
Either read the whole line as one string and then cut it in a transformer (with e.g. the field function), or use the "Hierarchical Data" Stage to interpret your JSON string.
I am upgrading Gradle, but all the articles and solutions for this claim they can fix it, yet they still just show the process for the Java 17 way. Oracle provides Java SDK 21 and 23 by default, so shouldn't that be the standard? It has already been a year and a solution for a smooth upgrade is still not out.
Solved the problem here by simply reloading VS Code. A handy extension for this is called, appropriately, Reload.
It's a simple solution. In the VS Code terminal run flutter config --jdk-dir "C:\Program Files\Java\jdk-23", changing the path to wherever your JDK is installed. This will then allow you to run flutter doctor --android-licenses.
You do have to make sure you have set up your environment variables under system variables. Again, be sure to change the location depending on your Java installation.
JAVA_HOME = C:\Program Files\Java\jdk-23
Path = C:\Program Files\Java\jdk-23\bin
I had the same issue when I upgraded from WiX 3 to WiX 4. My issue was simply an invalid directory. Make sure the file to be copied is at the correct path provided.
They have had constant outages of the API for the last 2 weeks. It has become unusable. Check the status page https://status.deepseek.com/ - while it is currently green, that only indicates it is up, not that you will get a reliable response. That is the reason for the "Degraded Performance" marker. The deepseek-chat model will give an API response less than 50% of the time.
I made it a lot easier...
for (let number = 1; number < 101; number++) {
  if (number % 15 === 0) console.log("FizzBuzz");
  else if (number % 3 === 0) console.log("Fizz");
  else if (number % 5 === 0) console.log("Buzz");
  else console.log(number);
}
Have you fixed this error? I have the same one
Issue: Tailwind CSS IntelliSense Not Working in VS Code (v4)
If you're using Tailwind CSS v4 and noticing that IntelliSense isn't suggesting class names properly, the issue might be related to the "tailwindCSS.experimental.configFile" setting in VS Code's settings.json.
Solution: the simplest and correct fix is to remove the following line from your settings.json:
"tailwindCSS.experimental.configFile": "./tailwind.config.js"
Why does this fix work?
In older versions of Tailwind CSS, this setting was used to manually specify the config file for IntelliSense. However, with Tailwind CSS v4, VS Code's Tailwind extension automatically detects the config file, making this setting unnecessary. Keeping it might cause IntelliSense to work incorrectly or incompletely.
Steps to Fix:
1. Open VS Code.
2. Press Ctrl + Shift + P → search for "Preferences: Open Settings (JSON)".
3. Find and remove the following line:
"tailwindCSS.experimental.configFile": "./tailwind.config.js"
4. Restart VS Code.
This could be due to several reasons, but for me it turned out to be a firewall issue. Disabling the firewall on the agent machine solved it.
Look at this for inspiration: https://github.com/docker/genai-stack/blob/main/docker-compose.yml
The issue might be NEO4J_URI=bolt://172.20.0.3:7687; try NEO4J_URI=neo4j://neo4j:7687 instead.
Which OS are you using? If you are using Ubuntu, try the following command:
sudo netstat -tulpn | grep 80
Otherwise, map the container port to a free port (for example 8888) by adding
-p 8888:80
All the previously suggested libraries are partially dead; consider using this library instead: https://github.com/jazzband/django-fernet-encrypted-fields. It offers a maintained, drop-in solution for automatically encrypting and decrypting text fields in Django.
This will work if none of the other methods worked for you:
python3 -m pip install opencv-python
I'm not sure I can explain the problem or error, but here is what I had to do to get rid of it:
I have two resx files, one for the default language (English) and one for Danish. I had to manually edit the resx file that holds the Danish language from the version shown in the screenshot (the Danish resx file while the error message was still there) to the corrected one.
Ensure that your exception handling is set up; something simple like this:
app.UseExceptionHandler(exceptionHandlerApp
=> exceptionHandlerApp.Run(async context
=> await Results.Problem()
.ExecuteAsync(context)));
Okay, this is not an issue; it is actually a feature of Visual Studio. It was giving me red squiggles because I added the method under a UNITY_EDITOR directive and was using it in a method that was not under that directive.
Add this: mysqli_query($koneksi, "SET NAMES utf8;");
Cool, this code is very nice, really nice.
Try to increase maxSendMessageLength and maxReceiveMessageLength.
No, not if they receive from different ports - you can bind to only one port when calling bind().
"Hola, entiendo tu duda. El mandato start en el contexto de Remix generalmente se utiliza para ejecutar la versión construida de la aplicación en un entorno de producción. Es decir, inicia el servidor con los archivos generados después de ejecutar build. Por otro lado, el mandato dev se usa durante el desarrollo para iniciar un servidor que recarga automáticamente los cambios realizados en el código.
En resumen:
dev: Ideal para desarrollo, permite trabajar con actualizaciones en tiempo real.
start: Usado en producción, ejecuta la aplicación con los archivos ya compilados.
I'm facing same exact issue - how did end up you resolving it?
You can find several open and closed issues in UPS' GitHub API docs repository: https://github.com/UPS-API/api-documentation/issues/39
My understanding is that the delivery photo is not yet implemented, and the POD is an HTML document without the pictures.
I think you're using the wrong approach with JSON_EXISTS; please try the one below:
select data
from test
where JSON_EXISTS(
    data::jsonb,                -- convert your data to jsonb first
    '$.criteria.employee_id.in' -- corrected path of your expression
);
Editing an existing .gitignore, I added a filetype (e.g. *.myFileType) but didn't realise there was a preceding space. All file types in the area of the file I edited had a preceding space, and the IDE automatically added a preceding space to my entry...
I had the same issue. I got it when I deployed on cPanel, although my local development worked fine. You should update your Next.js version from 14.0.3 to 14.2.14 to solve this issue. In my case I had been on 14.0.1; once I updated, the issue was gone.
app.use("/api/v1/tasks", taskRoutes);
should be
app.use("/api/v2/tasks", taskRoutes);
How to Deploy Loki with Persistent Storage on AWS EFS Using Helm and Custom Security Context
Question:
I'm trying to set up Loki for log aggregation using AWS EFS as persistent storage in my Kubernetes cluster. I followed the steps to configure Loki with EFS-backed persistence, but I encountered several issues along the way. Here's a detailed overview of the process, including my values.yaml, Helm command, and the key steps involved.
Can someone provide insights or improvements on how I can ensure my Loki container pods are using the EFS volume and have appropriate security permissions to access and write to the persistent storage?
Here's a step-by-step guide based on my experience setting up Loki with AWS EFS using Helm and custom security settings. I'll also explain some of the important details and configuration steps in the values.yaml file.
Before starting, you need to ensure that your AWS EFS file system is properly set up and mounted. Here's the command I used to mount the EFS to the /data/loki directory:
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-0bbf29876ed6298ee.efs.us-east-1.amazonaws.com:/ /data/loki
To ensure Loki has proper permissions to write to the EFS directory, I created a separate user for Loki (in my case, I used loki with user ID 1002). I then added the user to the sudoers file to grant the required privileges. Here's the process:
Create a Loki user with ID 1002:
sudo useradd -u 1002 loki
Add the Loki user to the sudoers:
sudo visudo
# Add the following line to allow 'loki' to perform necessary file system operations
loki ALL=(ALL) NOPASSWD: ALL
Create the /data/loki directory and set permissions for the Loki user:
sudo mkdir -p /data/loki
sudo chown -R loki:loki /data/loki
values.yaml Configuration
Below is the key configuration for values.yaml that I used to set up Loki with persistent EFS storage.
test_pod:
  enabled: true
  image: bats/bats:v1.1.0
  pullPolicy: IfNotPresent

loki:
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  persistence:
    accessModes:
      - ReadWriteOnce
    annotations: {}
    enabled: true
    existingClaim: loki-pvc-now
    storageClassName: "efs-sc"
    mountPath: /data    # Mount path for EFS
    subPath: "/mike"    # Optional subPath
  securityContext:
    runAsUser: 1002
    runAsGroup: 1002
    fsGroup: 1002
  logDirectory: /data/mike
  storage:
    chunks:
      directory: /data/chunks
    indexes:
      directory: /data/indexes
  initContainers:
    - name: init-fs
      image: busybox:latest
      # busybox has no sudo; the init container already runs with sufficient privileges
      command: ["sh", "-c", "mkdir -p /efs/chunks /efs/indexes && chown -R 1002:1002 /data"]
      volumeMounts:
        - name: loki-storage
          mountPath: /data/mike

promtail:
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push

grafana:
  enabled: false
  image:
    tag: 8.3.5

proxy:
  http_proxy: ""
  https_proxy: ""
  no_proxy: "loki"
Key values.yaml Settings:

Persistence Configuration:
- efs-sc is used as the storageClassName, which corresponds to the EFS storage class.
- mountPath is set to /data (this is where the EFS file system will be mounted inside the Loki container).

Security Context:
- The container runs as the loki user (with UID 1002), and we grant it access to the /data directory through the fsGroup setting.

Init Containers:
- The init-fs container initializes the directory structure inside /data (such as chunks and indexes) before the main Loki container starts. It also ensures that the correct permissions are set for the directories.

SubPath and Mount Path:
- A subPath (/mike) is used in the mountPath to isolate Loki data. You can adjust this based on your specific needs.

To install the Loki stack with the above values.yaml, I used the following Helm command:
helm install loki grafana/loki-stack --namespace=monitoring -f values.yaml
A couple of things to double-check:
- Make sure the loki user has proper permissions (read/write) for /data/loki and its subdirectories. Misconfigured permissions are a common source of issues.
- Verify the securityContext settings. If runAsUser, runAsGroup, or fsGroup aren't set correctly, the Loki container may not have the required permissions to write to the mounted EFS directory.

By following these steps, you can successfully deploy Loki with AWS EFS as persistent storage. If you face any issues, carefully check the EFS mounting, Loki container permissions, and Helm values to ensure everything is set up correctly.
Yes, it was not clear; I attach the entire code, which works until you try to embed an HTML file.
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders
import pandas as pd
import sqlalchemy
import smtplib
from jinja2 import Template

# Read the Jinja2 email template
with open("C:/tmp/plotly_graph.html", "r") as file:
    template_str = file.read()
jinja_template = Template(template_str)

import glob

folder_dir = "C:\\Users\\n.restaino\\PycharmProjects\\pythonProject\\.venv\\"
image_path = "C:\\Users\\n.restaino\\PycharmProjects\\pythonProject\\.venv\\image1.png"

if __name__ == "__main__":
    # Connection details
    user = 'a'
    pw = 'a'
    host = '0.0.0.10'
    port = '1521'
    db = 'a.a.com'
    engine = sqlalchemy.create_engine('oracle+cx_oracle://' + user + ':' + pw + '@' + host + ':' + port + '/?service_name=' + db)

    my_query = """SELECT sum(FTR_VALORE) val, AN_RAGSOC1 cli FROM fatrig JOIN ancf ON cf_cod=ftr_clfo
                  WHERE ftr_ese='2025'
                  GROUP BY AN_RAGSOC1
                  ORDER BY sum(ftr_valore) desc
                  FETCH FIRST 20 ROWS ONLY"""
    df = pd.read_sql(my_query, engine)

    ax = df.plot.bar(x='cli', y='val', rot=60, figsize=(30, 20))
    fig = ax.get_figure()
    fig.savefig(image_path)

    email_user = '[email protected]'
    password_user = '1111'
    email_send = '[email protected]'
    subject = 'Python'

    msg = MIMEMultipart()
    msg['From'] = email_user
    msg['To'] = email_send
    msg['Subject'] = subject

    email_content = jinja_template.render(msg)
    msg.attach(MIMEText(email_content, "html"))

    body = """<h1> Sales Report </h1> {df.to_html()}
    <img src="cid:image1">
    {% include "C:/tmp/plotly_graph.html" %}
    """
    msg.attach(MIMEText(body, 'html'))

    msgRoot = MIMEMultipart('mixed')
    msgAlternative = MIMEMultipart('mixed')

    i = 1
    # iterate over files in that directory
    for images in glob.iglob(f'{folder_dir}/*'):
        # check if the image ends with png
        if images.endswith(".png"):
            attachment = open(images, 'rb')
            part = MIMEBase('application', 'octet-stream')
            part.set_payload(attachment.read())
            encoders.encode_base64(part)
            part.add_header('Content-Disposition', "attachment; filename= " + images)
            part.add_header('Content-ID', 'image1')
            i = i + 1
            msg.attach(part)
            part = MIMEBase('application', 'octet-stream')
            part.set_payload(attachment.read())
            encoders.encode_base64(part)
            part.add_header('Content-Disposition', "attachment; filename= " + images)
            msg.attach(part)

    text = msg.as_string()
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login(email_user, password_user)
    server.sendmail(email_user, email_send, text)
    server.quit()
Let me know if I can add other details
Thanks in advance
bro what the fuck does that mean ?
@bot.message_handler(content_types=["contact"])
def contact_messages(message):
    bot.send_message(message.chat.id,
                     "Contact received. Phone number: " + message.contact.phone_number,
                     reply_markup=back)
In my case I just changed its location (e.g. under the messages package) and then put it back in its previous place, src/main/resources; then it worked.
Did you find a solution or get the above code working? I’m facing the same error and crash.
I created an IR on the DEPT sample table, then applied a highlight, saved the report as "Default Report Settings, Primary Report", and clicked the browser's refresh button. The report shows the highlight, so I'm not able to reproduce your issue. Can you reproduce this using the sample EMP/DEPT tables?
You can obtain the metadata of any object using the get_object_attributes function available on the boto3 S3 client.
Read more about it at: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/get_object_attributes.html
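For example, a minimal call might look like this (the bucket and key names are placeholders; ObjectAttributes selects which pieces of metadata to return):

import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key; request only the attributes you actually need.
resp = s3.get_object_attributes(
    Bucket="my-example-bucket",
    Key="path/to/object.txt",
    ObjectAttributes=["ETag", "StorageClass", "ObjectSize"],
)
print(resp["ObjectSize"], resp.get("StorageClass"), resp.get("ETag"))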
Came across this error while changing docker image from ubuntu to alpine. Explicit CGO disable during build helped.
For manual launch:
CGO_ENABLED=0 go build .
My case with Dockerfile:
FROM golang:1.22
# ...
ENV CGO_ENABLED=0
# ...
RUN go build
Make the elements flex or grid containers; this will give you proper margins and they will not collapse.
Based on the answer I got from GitHub:
import langsmith as ls

with ls.tracing_context(project="foo"):
    ...
I'm not sure about the state of things in mid-2021, but as of 2025, the answer of @mnist is incorrect. It is very well possible to set an input in a child module using shiny::testServer():
testServer(summaryServer, args = list(var = reactiveVal(1:10)), {
  cat("var active?", d_act(), "\n")
  range <- session$makeScope("range")
  range$setInputs(go = 1)
  cat("var active?", d_act(), "\n")
})
should do exactly what was asked for.
There is a minor issue with the provided example code though: rangeServer() should return var, e.g. as
rangeServer = function(id, var){
  moduleServer(id, function(input, output, session){
    # when button gets clicked
    eventReactive(input$go, {
      range(var(), na.rm = TRUE)
    }, ignoreInit = TRUE, ignoreNULL = TRUE)
  })
  var
}
given that in summaryServer(), we assign the result of rangeServer() to range_val.
I have the same problem but links don't work.
I vote AVL! I have some different perspectives on this issue. From an aesthetic standpoint, AVL trees are more visually appealing, and their balancing principles are more straightforward. Red-black trees, on the other hand, introduce the concept of color, using color changes to replace some rotation operations. However, rotation operations are not significantly more complex than color changes, especially during deletion, where red-black trees also require extensive case discrimination, which detracts from the elegance of the code.
The mainstream view is that red-black trees perform better than AVL trees in terms of insertion and deletion. However, few people have implemented both red-black trees and AVL trees in the same language, with similar styles and approaches, and conducted systematic testing and comparison.
I implemented both red-black trees and AVL trees using numpy and numba, and tested them by sequentially inserting and then deleting 1e7 random float numbers. The results showed that AVL trees had a slight advantage.
For comparison, I also included the STL Set with O3 optimization and Java's TreeMap. The comparison results are as follows:
1e7 random float push/pop one by one
| Test Object       | Link | Insertion Time | Deletion Time |
|-------------------|------|----------------|---------------|
| My AVL Tree       | Link | 6.4s           | 6.4s          |
| My RedBlack Tree  | Link | 6.7s           | 6.9s          |
| STL Set           | Link | 7.6s           | 8.2s          |
| Java TreeSet      | Link | 7.3s           | 10.3s         |
I believe this issue requires more implementations and testing, as factors beyond theoretical computational complexity, such as cache-friendliness, can significantly impact performance. Even when considering theoretical performance, it is essential to record in the code the number of branch selections, node operations, and color changes, rather than making assumptions.
And I posted another question: Does red-black tree really have advantages over avl tree?
Can you share your system resources?
How are you? AWS provides a mix of global and regional services, and it's important to differentiate between them. Here's an updated overview of some of the main global services.
Real Global AWS Services:
Other Global AWS Services:
Amazon SES (Simple Email Service) – Although SES uses specific endpoints, it is considered a global service as it can send and receive emails worldwide.
AWS Certificate Manager (ACM) – Server certificates in ACM are global as they can be used across different AWS regions.
The root cause in my case was that I had multiple similar redirect URIs registered in the Azure app. When I removed all unnecessary redirect URIs except for https://localhost:5267, the issue was resolved.
If you're facing the same HttpListenerException, I recommend checking your Azure AD app registration and ensuring that only the correct redirect URI is configured.
Hope this helps!
I suggest use "request" module instead of "node-fetch".
request(url, (err, response, body)=>{
if(err){
console.log(err);
}else if (response.statusCode !== 200){
console.log(`Request faild : Status code is ${response.statusCode}`);
}else{
console.log(body);
}
})
above is sample code for use request. Regards.
Hey, I guess you have upgraded your react-native-gesture-handler to the latest version; downgrading it to 2.22.0 solves this issue.
reference: https://github.com/software-mansion/react-native-gesture-handler/issues/3385
FYI we've now released https://github.com/jqwik-team/jqwik-mockito which offers integration between jqwik and mockito. If you have any problems, please feel free to raise an issue.
For the current NuGet System.Memory package, version 4.6.0, you need to add this redirect in web.config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="System.Memory" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-4.0.2.0" newVersion="4.0.2.0" />
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
That's because in Project References, the NuGet package shows version 4.0.2.0.
As of 2025/02: aws ssm get-parameter --name "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64" --region us-east-1 --query "Parameter.Value" --output text
How do you handle this if the value defined by c:set refers to a var (the name of the iterator variable used to refer to each row of data)? In that case the value can't be displayed.
e.g.:
<c:set var="column1_label" value="Id" scope="view" />
<c:set var="column1_value" value="#{data.id}" scope="view" />
<c:set var="column2_label" value="Code" scope="view" />
<c:set var="column2_value" value="#{data.code}" scope="view" />
<c:set var="column3_label" value="Name" scope="view" />
<c:set var="column3_value" value="#{data.name}" scope="view" />
<p:dataTable id="dataTableId" value="#{testView.products}" var="data">
<c:forEach begin="1" end="3" var="idx">
<p:column headerText="#{viewScope['column' += idx += '_label']}">
<h:outputText value="#{viewScope['column' += idx += '_value']}" />
</p:column>
</c:forEach>
</p:dataTable>
The most effective way to disregard a return value is to explicitly cast it to void
static_cast<void>(foo()); // Explicitly ignoring the return value
You can generate your credits by creating this file somewhere on your computer, generate-md-credits.php:
<?php
// Take the output of "composer fund --format=json" on stdin and generate
// a simple list of donation links in Markdown.
$stdin = file_get_contents('php://stdin', 'r');
$entries = json_decode($stdin);
foreach ($entries as $entry => $links) {
    foreach ($links as $link => $deps) {
        foreach ($deps as $dep) {
            echo "* [Donate to **$entry - $dep**]($link)\n";
        }
    }
}
Then using this command from your command line:
composer fund --format=json | php generate-md-credits.php
(If you are not in GNU/Linux or similar environments, please add a comment to share how you do this in one command...)
It generates something like this (rendered in Markdown):
So the script easily generates some Markdown, and from time to time you can easily update your README. You can also automate this a bit more, but it's still something.
I'm glad to see more solutions though. Thanks for sharing.
After a few days of research I found my way out of the problem; here it is for anyone who may need it:
!!! USE THE AZURE WEB CLI CLIENT ID FOR STEPS 1 & 3 !!!
1) In code, call this Microsoft endpoint https://login.microsoftonline.com/common/oauth2/v2.0/devicecode and request a device code.
2) Give the device code to the user along with the login URL, which is in json.RootElement.GetProperty("verification_uri").
3) Poll https://login.microsoftonline.com/common/oauth2/v2.0/token until it gives you a successful token (i.e. until the user logs in successfully), every 5 seconds for example.
4) Congrats, you now have the access token and can register an app.
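To make the flow more concrete, here is a rough protocol sketch in Python (the original answer is .NET-based; CLIENT_ID and SCOPE are placeholders and error handling is minimal):

import time
import requests

CLIENT_ID = "<your client id>"                  # placeholder
SCOPE = "https://graph.microsoft.com/.default"  # placeholder scope

# Step 1: request a device code
dc = requests.post(
    "https://login.microsoftonline.com/common/oauth2/v2.0/devicecode",
    data={"client_id": CLIENT_ID, "scope": SCOPE},
).json()

# Step 2: show the user the verification URL and user code
print(dc["message"])  # contains verification_uri and user_code

# Step 3: poll the token endpoint until the user has signed in
while True:
    time.sleep(dc.get("interval", 5))
    t = requests.post(
        "https://login.microsoftonline.com/common/oauth2/v2.0/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": CLIENT_ID,
            "device_code": dc["device_code"],
        },
    ).json()
    if "access_token" in t:
        break
    if t.get("error") not in ("authorization_pending", "slow_down"):
        raise RuntimeError(t)

# Step 4: use t["access_token"] for the registration call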
Good afternoon. Thank you for your explanation. I think I didn't explain my question correctly; let me try again. I put a console log in the code:
_JSStorageGet: function (keyPtr, fallbackValue) {
    // you first need to actually read them out!
    var keyString = UTF8ToString(keyPtr);
    // Deal with api and callback
    //return fallbackValue;
    console.log('_JSStorageGet complete');
    return 42;
}

public static int GetInt(string key, int fallback = 0)
{
    int retval;
    retval = _JSStorageGet(key, fallback);
    Debug.Log(" GetInt: " + retval);
    return retval;
}
If I run GetInt I get:
GetInt: 0
_JSStorageGet complete
I'm expecting
_JSStorageGet complete
GetInt: 42
Enable Bidirectional for the Shared Clipboard. Path: Settings > General > Advanced. If it still doesn't work, update with sudo dnf update -y and then restart the machine. I hope this will help with copy/paste from Windows to the CentOS terminal.
I was getting a zillion errors and did not want to fix them all. In NetBeans, you can put this useful option in the Documenting section of the project properties. No need to edit cryptic Ant build XML files :-)
I am currently having the same issue, and I think it's coming "Canvas" from @react-three/fiber...
Unfortunately, I forgot about my question here, but I was able to get the images painted as I desired. I think all the answers are present in the documentation, but there is just no example for the current version that connects all the dots. I extended my example above so far that the only things missing are the exact image positions, but everyone should be able to handle that on their own, as the required function is given as well; I just left out the parameters. And as far as I understand, this method is quite efficient, as the textures are only loaded once and are just reused every frame.
use eframe::egui;
use egui::{Sense, Shape, Vec2};
use epaint::{CircleShape, Color32, Pos2, Stroke};
// new crate, it is important for the function below
use image;

// loads an image from a given path
fn load_image_from_path(path: &std::path::Path) -> Result<egui::ColorImage, image::ImageError> {
    let image = image::ImageReader::open(path)?.decode()?;
    let size = [image.width() as _, image.height() as _];
    let image_buffer = image.to_rgba8();
    let pixels = image_buffer.as_flat_samples();
    Ok(egui::ColorImage::from_rgba_unmultiplied(
        size,
        pixels.as_slice(),
    ))
}

pub struct MyApp {
    // vector full of `CircleShape`s
    circles: Vec<Shape>,
    // another change, the vector should store `TextureHandle`s
    images: Vec<egui::TextureHandle>,
}

impl MyApp {
    fn new(cc: &eframe::CreationContext<'_>,
           coordinates: Vec<(f32, f32)>,
           image_paths: Vec<std::path::PathBuf>) -> Self {
        let circles = coordinates
            .iter()
            .map(|(x, y)| {
                Shape::Circle(CircleShape {
                    center: Pos2 { x: *x, y: *y },
                    radius: 40.0,
                    fill: Color32::from_rgb(255, 0, 0),
                    stroke: Stroke::new(10.0, Color32::from_rgb(130, 0, 0)),
                })
            })
            .collect();

        // next critical step, actually load the images as textures
        let texture_handles: Vec<egui::TextureHandle> = image_paths
            .iter()
            .enumerate()
            .map(|(i, img_path)| {
                cc.egui_ctx.load_texture(
                    format!("image_{}", i),
                    load_image_from_path(img_path).unwrap(),
                    egui::TextureOptions::default())
            })
            .collect();

        Self {
            circles,
            // use the loaded texture handles here
            images: texture_handles,
        }
    }

    // some functions to initialize, remove and add shapes
}

impl eframe::App for MyApp {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.heading("Show some circles and images");
            let (_reponse, painter) = ui.allocate_painter(Vec2::new(1000.0, 300.0), Sense::hover());
            painter.extend(self.circles.clone());

            // I want the painter to place the images here
            for texture_handle in self.images.iter() {
                let texture_id = egui::TextureId::from(texture_handle);
                // choose a position where to place the image
                // as you can see this code part is not finished
                let center_position = egui::Rect::from_center_size(...);
                painter.image(
                    texture_id,
                    center_position,
                    egui::Rect::from_min_max(pos2(0.0, 0.0), pos2(1.0, 1.0)),
                    Color32::WHITE,
                );
            }
        });
    }
}