Please try adding "--enable-hive-sync"
final activityID = await _liveActivitiesPlugin.createActivity(activityModel,
removeWhenAppIsKilled: true);
This works on Android only; it doesn't work on iOS.
The endpoint `https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}/objects/${objectName}` you used has been deprecated for a while; please upload directly to S3 instead:
1. GET oss/v2/buckets/:bucketKey/objects/:objectKey/signeds3upload
2. Upload the file to S3 using the pre-signed S3 URL obtained above.
3. POST oss/v2/buckets/:bucketKey/objects/:objectKey/signeds3upload
ref: https://aps.autodesk.com/blog/object-storage-service-oss-api-deprecating-v1-endpoints
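Roughly, that three-step flow looks like this in Python with the requests library (a minimal sketch; the token, bucket key, object key, and file name are placeholders, and the "urls"/"uploadKey" fields are per the OSS v2 docs):
import requests

BASE = "https://developer.api.autodesk.com/oss/v2"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}  # placeholder token

# 1. Ask OSS for a pre-signed S3 upload URL.
r = requests.get(f"{BASE}/buckets/{bucket_key}/objects/{object_key}/signeds3upload", headers=headers)
r.raise_for_status()
upload = r.json()

# 2. PUT the file bytes straight to S3 -- no Authorization header on this request.
with open("model.rvt", "rb") as f:
    requests.put(upload["urls"][0], data=f).raise_for_status()

# 3. Tell OSS the upload is complete.
requests.post(
    f"{BASE}/buckets/{bucket_key}/objects/{object_key}/signeds3upload",
    headers=headers,
    json={"uploadKey": upload["uploadKey"]},
).raise_for_status()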
Your Scontrino class contains a LocalDateTime and a Map<Articolo, Integer>. Jackson, the JSON serializer used by Spring Boot, cannot deserialize these properly out of the box without help, especially a Map<Articolo, Integer> that uses an entity as the key.
Use a Long or String as the key instead,
and refactor your quantities into DTOs, like:
class ArticoloQuantita {
    private long articoloId;
    private int quantita;
}
Add the Jackson datatype module:
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
</dependency>
@Bean
public Jackson2ObjectMapperBuilderCustomizer jsonCustomizer() {
    return builder -> builder.modules(new JavaTimeModule());
}
class ScontrinoRequestDto {
    private LocalDateTime data;
    private Map<Long, Integer> quantita;
}
@PostMapping
public Scontrino crea(@RequestBody ScontrinoRequestDto dto) {
    return scontrinoService.crea(dto);
}
This is an old thread, but for others looking for an explanation, I'm in this situation now. After looking at the branch I want to switch to at GitHub, I can see that a single line in a file I want to switch to has different content than that line on the current branch (the one I want to switch away from). Git reports "nothing to commit" because that file is ignored.
For the OP, considering the long list of files you had, and the fact that forcing things did no harm, my guess is that you modified the files in the current branch in some trivial way, like changing the file encoding or the EOL character.
There are some suggestions about handling this situation here: Git is deleting an ignored file when i switch branches
Unfortunately, my situation is more complex. I have three branches: master, dev, and test. The file is ignored in both dev and test, so I can switch between them at will. I just can't ever switch to master. I have remotes for all three branches and I'm the only developer. I'm sure there's a way to fix this without messing things up, but I'm not sure what would ensure that in the future I can merge one of the other branches into master and still push master to the remote.
scan 'table_name', {COLUMNS => ["columnfamily:column1", "columnfamily:column2"], FILTER => "SingleColumnValueFilter('columnfamily', 'columnname', operator, 'binary:value_you_are_looking_for')"}
'column1' or 'column2' refer to your actual column names.
'columnfamily' is the column family you defined while creating the table.
SingleColumnValueFilter is used to apply a condition on a single column.
operator can be a comparison symbol like =, !=, <, >, etc.
'binary' is a keyword used to ensure the value is compared as binary data.
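For example, with a hypothetical users table that has an info column family, returning only rows whose info:city equals London would look like:
scan 'users', {COLUMNS => ['info:name', 'info:city'], FILTER => "SingleColumnValueFilter('info', 'city', =, 'binary:London')"}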
I disabled all extensions in VS 2022 and restarted it. Now, it's working without any issues.
Not an answer to your question, but I am unable to comment yet. Just thought I'd chime in and say you can clean this up a bit by putting those examples directly on the ErrorBody type.
type ErrorBody = {
/**
* @example "https://someurl.com"
**/
type: string,
/**
* @example 409
**/
status: 400 | 401 | ...,
/**
* @example "error/409-error-one-hundred-and-fifty"
**/
code: string,
/**
* @example "This is an example of another error"
**/
title: string,
/**
* @example "You should provide error detail for all errors"
**/
detail: string
}
Then your endpoints can become:
@Response<ErrorBody>('409', 'A 409 error')
@Response<ErrorBody>('4XX', 'A 4xx error called fred')
I am also looking for an answer to this problem. I want all my API error responses to conform to the application/problem+json response type that can be found in this spec. I don't want to manually write out every possible @Response decorator, though. I wish you could do something like:
@Response<ErrorBody>( ErrorStatusCodeEnum, 'An error' );
Where ErrorBody would now have the form
type ErrorBody = {
/**
* @example "https://someurl.com"
**/
type: string,
/**
* @example 409
**/
status: ErrorStatusCodeEnum,
/**
* @example "error/409-error-one-hundred-and-fifty"
**/
code: string,
/**
* @example "This is an example of another error"
**/
title: string,
/**
* @example "You should provide error detail for all errors"
**/
detail: string
}
and TSOA would map that to all possible error codes in the enum.
Wish I could elaborate more on the matter; at hand that's not possible. Important information has been deleted, by whomever, on other devices I've had, and I'm being blocked from the information I seek. I know it's there, and I have some proof. I'm thinking of going to an investigator, the FCC, and local and federal authorities.
Yes, this setting is simple and effective.
It is just that the ratio you choose may be too small, and the final learning rate may be almost 0, causing the model to converge too early. For example, start_lr = 1e-2, ratio = 1e-4 ➜ final_lr = 1e-6.
Perhaps you can increase the range of ratio a little bit, for example
ratio = trial.suggest_loguniform("lr_ratio", 1e-2, 0.5)
You can make appropriate adjustments according to your experimental situation.
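A minimal sketch of what that could look like inside an Optuna objective (start_lr and the trial count are just illustrative; plug in your own training loop):
import optuna

def objective(trial):
    start_lr = 1e-2
    # Search the decay ratio on a log scale over a wider range.
    ratio = trial.suggest_float("lr_ratio", 1e-2, 0.5, log=True)
    final_lr = start_lr * ratio  # worst case 1e-4 instead of 1e-6
    # ... build your scheduler from start_lr/final_lr, train, and return the metric
    return final_lr  # placeholder for your validation metric

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)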
If your user account employs multifactor authentication (MFA), make sure the Show Advanced checkbox isn't checked.
I used this codelab and did what was said and it worked.
https://codelabs.developers.google.com/codelabs/community-visualization/#0
The path used in the manifest should be the gs:// path instead of the https:// path.
def docstring(functionname):
    # Write your code here
    help(functionname)

if __name__ == '__main__':
    x = input()
    docstring(x)
I just needed to update my browsers to the latest version.
Microsoft Edge is up to date. Version 138.0.3351.83 (Official build) (64-bit)
Chrome is up to date Version 138.0.7204.101 (Official Build) (64-bit)
View tables inside namespace:
list_namespace_tables "namespace_name"
In place of "namespace_name" type your namespace.
Ancient question... but I recently learned that if you are creating a Project-based Time Activity and you set an Earning Type in the request, Acumatica will blank out the project task.
The solution in this case is to set all fields except the project related ones, grab the ID of the created row, and follow up with a new request to set the Project and Task on just that row.
Alternatively, you may consider IndexedDB.
If this happens in SoapUI then:
On the Message Editor, just below the request message window, click the WS-A button. Then select the checkbox Add default wsa:To.
Have you checked whether the file unins000.exe mentioned in the pop-up window exists? Also, have you tried reinstalling VS Code?
We got the same error when we tried to download results generated by our custom GPT. We then enabled Code Interpreter & Data Analysis under Capabilities, and that seems to have solved the issue.
I used ActivityView on Android 11 successfully, but it failed on Android 12. Maybe Google removed the API in Android 12.
My co-worker already solved it. I just used py -m pip install robotframework.
You might try adding this to Spring Boot's application.yaml, as shown here: https://www.baeldung.com/mysql-jdbc-timezone-spring-boot
spring.jpa.properties.hibernate.jdbc.time_zone=UTC
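In application.yaml form that same property would be:
spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          time_zone: UTC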
Clearing the SSMS cache fixed the problem:
Close all SSMS instances and remove all the files in the following folders: %USERPROFILE%\AppData\Local\Microsoft\SQL Server Management Studio (or SSMS in new versions) and %USERPROFILE%\AppData\Roaming\Microsoft\SQL Server Management Studio (or SSMS).
The workaround I found is:
git fetch st develop
git cherry-pick -Xsubtree=foo <first_commit_to_cherry-pick>^..FETCH_HEAD
You have to deduce <first_commit_to_cherry-pick> manually, though.
I faced this problem, and to this day no answer here worked for me, so this is what I did to solve it for myself:
set up TCP outbound rules on my firewall,
and upgraded my Node version.
Hope it helps.
This assumes, of course, that you already applied all the mentioned advice about whitelisting your IP address and it still did not work.
I came across this question because I was trying to clear my 0.1% of doubt, but my opinion is that concatenation and set product are different notations for the same concept. Just like subscripts and indices are just a different notation for functions (usually over discrete sets).
That said, set/cross product is a better notation when the sets have some operations that carry over to the product, for example by taking the direct sum of simple number fields with themselves you get a vector space. With concatenation notation it's a bit difficult to clearly denote the operations.
Example: Imagine having a one-time pad or carryless addition like operation on strings so that you can sum "cat" and "dog", then in set product notation "(cat,1) + (dog, 2) = (cat + dog, 1+2)" but in concatenation notation you get "cat1 + dog2 = cat+dog1+2" which doesn't make sense unless you allow something like parenthesis in the concatenation notation so that you can do "(cat+dog)(1+2)", which now is the same as the set product notation where ")(" is simply replaced with ",".
Note: carryless addition is indeed the direct sum of bitstrings with XOR as addition operation, so it can be done.
However, I wouldn't go as far as to say the direct sum is always a special case of set product.
Direct sum can be defined by property of the operations instead of by construction, you might then be able to find an example of direct sum that is not built on a set product, but the most common direct sums that you immediately think of is a set product together with a new operation.
Well, I bought a new Mac mini and was trying to set up sharing to my Raspberry Pi before worrying about everything else.
Go figure: step 7 above about SMB encryption kept me from being able to figure out what was wrong for a few days.
Thanks!
Newly created accounts can't add comments, so I'm writing in the reply form. I ran into the same problem about 10 days ago. Just a regular Google account, not a workspace, was registered a long time ago, and an error about lack of space began to appear. After receiving additional information via Google Drive API, I saw that storageQuota.limit = 0, while there is disk space on drive.google.com
Part of the response: 'storageQuota': {'limit': '0', 'usage': '3040221', 'usageInDrive': '3040221', 'usageInDriveTrash': '0'}
Service Accounts do not have storage quota. Leverage shared drives.
Synchronous mode blocks: the JavaScript after an Ajax (XMLHttpRequest) call only executes once the request has completed. If the request fails, an exception is thrown, nothing after it will execute, and your website will fail to function. And why you can't use asynchronous mode is beyond me.
Fix the CSS file like this; it will work well. I also don't understand why you need the "snaps-inline" class.
.media-scroller {
...
padding : 0px 10px;
...
}
.snaps-inline {
}
Have you figured it out for background call notifications? Please help me if you have solved it; I have also been searching for months and cannot implement it.
I think you are looking for this? https://developers.google.com/pay/issuers/apis/push-provisioning/web
This is Google's Web Push provisioning API that does exactly what you've asked - allows users to add cards to Google Wallet from a website.
I am debugging an ms addin while running Office 365 in the browser.
xdminfo, as well as baseFrameName, are different every time I open the document.
So they're completely useless to find what the id of a document is. Any idea to get a proper ID will be greatly appreciated - and not just in a web context...
Update your config's default key to uppercase DEFAULT.
theme: {
radius: {
DEFAULT: 'md',
}
}
As an update, and to complement Alex's answer, this option does not work properly with seaborn:
sns.scatterplot(
data=tips,
x="total_bill",
y="tip",
c=kernel,
cmap="viridis",
ax=ax,
)
Use instead:
plt.scatter(tips["total_bill"], tips["tip"], c=kernel, cmap='viridis')
Have been experiencing the same issue myself. Have not found a solution besides either removing the relationship, or disabling Incremental Refresh altogether for the table on the one-side of a One:Many relationship.
The issue occurs when a record that already exists in a historical partition is changed/updated. That triggers the record to be loaded into a new partition, and for that split-second, Power BI sees it as two (duplicate) values and kills the refresh. Adding extra deduplication steps in Power Query/SQL will not fix the issue, since it is caused when the Power BI service partitions the data. Refreshes succeed just fine locally in Power BI desktop- there are no real duplicates in the source.
Setup in my use case is a 2 year archive, with the incremental refresh period set to 3 days. It uses a static CREATEDDATETIME field for each record. Also using Detect data changes on a LASTMODIFIED field, to account for records created a while back that may be updated later on. Works like a charm for all of my tables on the Many-side of any One:Many relationships.
Ultimately, the one-sided tables are usually smaller (in theory) so the cost of disabling incremental refresh is typically not prohibitive.
Whoever downvoted this question, would be great to know the reason.
It is a legitimate question, it actually saved my life after working on a legacy project so many hours figuring out why the system navigation bar was still showing even after applying the enableEdgeToEdge() functionality as described in the official docs.
I am really glad I found this question/answer otherwise it would have taken me forever to actually get the culprit.
One other gotcha - I used right-click -> script table as insert to... and used that for the import from a backup table, but still got the error even with SET IDENTITY_INSERT tablename ON, because the script creation automatically skipped the identity column. The insert succeeded once I added the column back into the insert and select parts of the statement.
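For illustration, with a hypothetical table and backup (note the identity column Id appears in both the insert and select lists):

SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (Id, Name)
SELECT Id, Name
FROM dbo.MyTable_Backup;

SET IDENTITY_INSERT dbo.MyTable OFF;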
In my case, changing the Minimum Deployment from iOS 16 to 17 worked.
First of all, make sure that acks is -1 (all) and idempotence is true; otherwise you will get a ConfigException saying "ack must be all in order to use idempotent producer".
The consumer in the listener container will read the same message again, because the consumer position is reset by the recordAfterRollback method of ListenerConsumer when the transaction is rolled back on an exception.
Committing the offset (via the producer's sendOffsetsToTransaction) and sending the message via the Kafka template are done in the same Kafka transaction and with the same producer instance.
If you're worried about duplicates in the topic, which may occur, set isolation.level for your consumers to read_committed; this makes consumers read only committed messages.
You can read about Kafka transactions and how they work here.
Also, since you're inserting something into a database, read how to sync JDBC and Kafka transactions here, because KafkaTransactionManager can't cover this alone.
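For reference, a minimal sketch of the relevant Spring Boot properties (the transaction-id-prefix value is a placeholder; setting it is what enables Kafka transactions for KafkaTemplate):
spring:
  kafka:
    producer:
      acks: all
      transaction-id-prefix: tx-
      properties:
        enable.idempotence: true
    consumer:
      isolation-level: read_committed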
You are not specifying a region in your AWS CLI call. S3 is "global", so you will see your buckets in any region. However, you'll need to specify --region eu-west-1, the same region where you deployed your REST API with Terraform, to be able to see it in your response.
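For example, assuming the API was deployed to eu-west-1:
aws apigateway get-rest-apis --region eu-west-1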
I was having issues with this but downgrading Remote - SSH: Editing Configuration Files fixed the issue for me
import $ from "jquery";  // unused here; plain DOM APIs are enough

const rootApp = document.getElementById("root");
rootApp.innerHTML = '<button>ON</button>';

// Grab the button itself, not the container div.
const button = rootApp.querySelector("button");
console.log(button.innerHTML);

button.addEventListener("click", () => {
  if (button.innerHTML === "ON") {
    button.innerHTML = "OFF";
  } else {
    button.innerHTML = "ON";
  }
});
print("Folder exists:", os.path.exists(folder))
Please confirm that the folder path is correct, and if it is, step through the code with a debugger. I think your code would run if the path is correct.
I did a poor job managing my environment and branches. Comments from the staging ground helped me realize that I could just change the default branch in GitHub and rename them. Trivial query, but I am learning more about git as a result.
Managed to get it working by creating a zip file using ZipFile and adding the things I wanted in order; for the directories, I ran a for loop over each directory to add all the files inside it into new directories created inside the zip file.
Example code here:
import os
from zipfile import ZipFile, ZIP_STORED

os.chdir("foo/bar")
with ZipFile("foobar.zip", "w", ZIP_STORED) as foo:
    foo.write("mimetype")

with ZipFile("foobar.zip", "a", ZIP_STORED) as foo:
    for file in os.listdir("directory1"):
        fullPath = os.path.join("directory1", file)
        foo.write(fullPath)

os.replace("foobar.zip", "foo/bar/foobar.epub")
@bh6 Hello, apologies for reaching out this way, but would it be possible for you and I to discuss a previous project you did? Specifically this one right here https://electronics.stackexchange.com/questions/555541/12v-4-pin-noctua-nf-a8-on-raspberry-pi-4b
I have some questions about how you got the external power supply working. Please reach out to me at [email protected], thank you in advance!
I ran into this problem too.
The site has a countdown timer, which I watch; after a minute I refresh the page. But when the tab is not active, the script stops watching!
I couldn't find a more recent question, so I'm writing here.
To get the normal user interface you see in videos, where none of the problems I mentioned occur, you need to change the user interface: menu --> switch to classic user interface
Go to your Android SDK folder and open the ndk directory.
Inside, you may find multiple NDK versions. Identify the latest version based on the name.
Copy the latest NDK version number, for example, 29.0.13599879.
In your Flutter project, navigate to the app-level build.gradle or build.gradle.kts file.
Locate the android section and find the line:
ndkVersion = flutter.ndkVersion
and replace it with
ndkVersion = "29.0.13599879"
After making this change, run flutter clean and then flutter pub get to ensure your project recognizes the updated NDK.
Enjoy
Is there an updated version of this answer? I am trying to install SAP HANA Tools on Eclipse 2025-06 on an M1 Mac. I am using Java SE 24.0.1 as the JDK. I have tried installing both the x86_64 and AArch64 builds. When trying to install SAP HANA Tools from https://tools.hana.ondemand.com/kepler, I am getting: missing requirement 'org.eclipse.equinox.p2.iu; org.eclipse.emf.mwe.core.feature.group 1.2.1' but it could not be found.
I faced the same issue in my project; here is what I learned and my solution.
Why this issue happens:
It's common to face issues when running an older Flutter project with an updated Flutter SDK and Dart version, especially regarding the Android build. This is because Flutter continuously evolves, and major updates often bring breaking changes, particularly in the Android embedding (how Flutter integrates with Android native code) and the Gradle configuration.
1. Make a copy of your project (optional, to save your configs).
2. Delete the android folder:
rm -rf android
3. Recreate the Android project files:
flutter create .
4. If you previously used any configs such as Firebase, you need to add those to the android folder again.
5. Rebuild your app:
flutter clean
flutter pub get
flutter run
In case you're going crazy trying to debug this error message (as I was), it turned out I was using a foreign data wrapper to another database that had a much lower setting for idle_in_transaction_session_timeout.
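A quick way to confirm is to check the setting on the remote server behind the foreign data wrapper:
SHOW idle_in_transaction_session_timeout;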
On port 5060 you should see ERR_UNSAFE_PORT, which is expected: Chrome treats localhost:5060 as an unsafe port by default. Change your port to one of the following, preferably 3000:

| Port | Comment           |
|------|-------------------|
| 3000 | Common for dev    |
| 5000 | Used in Flask     |
| 8080 | Classic alt port  |
| 5173 | Vite default      |
| 8000 | Django, etc.      |

Also, I see that you're following https://www.youtube.com/watch?v=6BozpmSjk-Y&ab_channel=dcode, because I was too and found the same problem.
For /* in
app.get("/*", (req, res) => {
you should update this to
app.get("/:catchAll(*)", (req, res) => {
because /* doesn't work anymore in the newer versions of Express. If you have more questions/problems, ask ChatGPT...
Have you tried converting your connections to project-based?
Right click connection and select "Convert to Project Connection"
Redeploy and test
Union (disjunction), subtraction, and intersection are operations from discrete mathematics, used to find the differing or common elements of collections.
Just giving zero margin helps remove the gap at the top:
ul {
margin: 0;
}
Note also that 0.1 rad is about 5.72958°, so that might be a rather big step depending on what you want to do. If you want to use degrees, stick to degrees, and only convert the angle to radians when you need to calculate some sin or cos - @Thierry Lathuille
Updated Code:
import math
import time

import pyautogui
import win32api

# Assumed to be defined earlier in the original script (placeholder values here):
xcentre, ycentre = 960, 540  # screen centre, placeholder
cycles = 30                  # placeholder
position_list = []

time.sleep(2)
radius = 110  # px
angle = 90
step = 0

while True:
    x = radius * math.cos(math.radians(angle))
    y = radius * math.sin(math.radians(-angle))
    win32api.SetCursorPos((int(x + xcentre), int(y + ycentre)))
    if math.radians(angle) >= 2 * math.pi + math.pi / 2:
        break
    mx, my = pyautogui.position()
    if (angle - 90) % cycles == 0:
        print("black")
        pyautogui.mouseDown(button='left')
        position_list.append(mx)
        position_list.append(my)
        time.sleep(1)
    angle += 1

pyautogui.mouseUp(button='left')
This was marked as a duplicate of this issue, which has unfortunately been around since 2017.
If you use the enhanced flow (GetCredentialsForIdentity) a scope down policy is applied to all guest identities which doesn't include Bedrock.
In order to allow guest identities to access Bedrock, you need to use the classic flow (GetOpenIdToken and STS AssumeRoleWithWebIdentity) that doesn't apply the scope down policy.
With that said, I would not recommend giving guest users access to Bedrock, bad actors can create any number of guests and run up your Bedrock bill.
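For reference, a rough sketch of the classic flow with boto3 (the region, identity pool ID, and role ARN are placeholders):
import boto3

ci = boto3.client("cognito-identity", region_name="us-east-1")
identity_id = ci.get_id(IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000")["IdentityId"]
token = ci.get_open_id_token(IdentityId=identity_id)["Token"]

sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/GuestBedrockRole",
    RoleSessionName="guest",
    WebIdentityToken=token,
)["Credentials"]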
Edge and Chrome may render font-weight differently due to their font engines. Use specific font weights (e.g., 400, 700) and test across browsers for consistency.
How To Use Black Apps Script — quick tip for global search.
It took me a second to figure out how to search across files after installing the extension. It’s not true global search in one view, but it gets the job done. Super handy once you know how to use it:
1. Press Ctrl + F while in the Apps Script IDE to open the legacy search bar (this is the normal/standard way)
2. Search for a term (e.g., "folder"). It will show results on the current file.
3. On the legacy search bar, click the magnifying glass just to the left of the search term box — this reveals a second row.
4. Use the left/right arrows on that second row to jump between files and continue the keyword search in each file.
{% if request.path == '/contact/' %}
<p>You are in Contact</p>
{% elif request.path == '/shop/' %}
<p>You are in Shop</p>
{% endif %}
Google Cloud Source Repositories doesn't seem to link HEAD to a non-master default branch from the mirrored GitHub repo. I was working with Argo CD, which does an ls-remote on the HEAD branch as a test before establishing the connection successfully.
I cut a master branch from my default branch, pushed it to the origin, and that solved my issue.
Just add the following, and change SD to SDFS; then it will work:
#include "SDFS.h"
server.on("/getdata", HTTP_GET, [](AsyncWebServerRequest *request){
request->send(SDFS, "/subj1_1.txt", "text/plain");
});
An example can be found at https://github.com/EmileSpecialProducts/portable-Async-disk-drive
The command on the Ubuntu website is for Linux, not for Windows.
This article explains the SHA-256 command: https://www.shellhacks.com/windows-md5-sha256-checksum-built-in-utility/
If the result matches the part after "echo", your file should be OK.
I have created issue 17513 for this request in the Bicep repo, so I am marking this as complete for now.
But as @developer said in the comments, it could be possible via a DevOps pipeline or something similar, just not purely in Bicep.
Here's a simple Python example of an interview bot for nursing scholarship preparation. It asks common questions and collects the user's answers:
def nursing_scholarship_interview():
    print("Welcome to the Nursing Scholarship Interview Practice Bot!")
    questions = [
        "Tell me about yourself and why you want to pursue nursing.",
        "What are your greatest strengths as a nursing student?",
        "Describe a challenging situation and how you handled it.",
        "Why do you deserve this scholarship?",
        "Where do you see yourself in five years as a nurse?"
    ]
    answers = {}
    for q in questions:
        print("\n" + q)
        answer = input("Your answer: ")
        answers[q] = answer
    print("\nThank you for practicing! Here's a summary of your answers:")
    for q, a in answers.items():
        print(f"\nQ: {q}\nA: {a}")

if __name__ == "__main__":
    nursing_scholarship_interview()
You can run this script in any Python environment. It simulates an interview by asking questions and letting the user type answers. Would you like me to help you expand it with features like timed answers or feedback?
# self_learning_ea_system.py
import spacy
import numpy as np
import networkx as nx
import pickle
import os
import json
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from tensorflow import keras
from tensorflow.keras import layers
from sentence_transformers import SentenceTransformer

# --- 1. Knowledge Graph using triples + networkx
class KnowledgeGraph:
    def __init__(self):
        self.triples = []  # (s, r, o)
        self.graph = nx.DiGraph()

    def update(self, s, r, o):
        if (s, r, o) not in self.triples:
            self.triples.append((s, r, o))
            self.graph.add_edge(s, o, label=r)

    def query(self, s=None, r=None, o=None):
        return [
            (s0, r0, o0) for (s0, r0, o0) in self.triples
            if (s is None or s0 == s) and (r is None or r0 == r) and (o is None or o0 == o)
        ]

    def __str__(self):
        return "\n".join([f"{s} -[{r}]-> {o}" for s, r, o in self.triples])

# --- 2. NLP extractor with spaCy
nlp = spacy.load("en_core_web_sm")
embedder = SentenceTransformer('all-MiniLM-L6-v2')

def extract_triples(text):
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.dep_ == "ROOT":
            subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
            objects = [w for w in token.rights if w.dep_ in ("dobj", "attr", "prep", "pobj")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    if not triples:
        parts = text.split()
        for rel in ("is", "has", "part"):
            if rel in parts:
                i = parts.index(rel)
                if i >= 1 and i < len(parts) - 1:
                    triples.append((parts[i - 1], rel, parts[i + 1]))
    return triples

def triple_to_vec(s, r, o):
    return embedder.encode(f"{s} {r} {o}")

# --- 3. Relation prediction model
def build_model(input_dim):
    model = keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(input_dim,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# --- 4. Evolutionary algorithm
class EvolutionaryAlgorithm:
    def __init__(self, system, base_rate=0.02):
        self.system = system
        self.base_rate = base_rate
        self.mutation_rate = base_rate

    def update_mutation_rate(self, accuracy):
        self.mutation_rate = max(0.005, self.base_rate * (1 - accuracy))

    def evolve(self):
        model = self.system["model"]
        weights = model.get_weights()
        mutated = [w + self.mutation_rate * np.random.randn(*w.shape) for w in weights]
        model.set_weights(mutated)
        print(f"🔁 Mutated model weights with rate {self.mutation_rate:.4f}.")

# --- 5. Learning Module
class LearningModule:
    def __init__(self, kg, system):
        self.kg = kg
        self.system = system
        self.training_data = []

    def add_training_example(self, s, r, o, label):
        self.training_data.append((s, r, o, label))
        acc = self.train()
        self.system["ea"].update_mutation_rate(acc)

    def train(self, epochs=10, batch_size=16):
        if not self.training_data:
            print("No training data available.")
            return 0.0
        X, y = [], []
        for s, r, o, label in self.training_data:
            vec = triple_to_vec(s, r, o)
            X.append(vec)
            y.append(label)
        X = np.vstack(X)
        y = np.array(y)
        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
        model = self.system["model"]
        model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, verbose=0)
        preds = (model.predict(X_val) > 0.5).astype(int).flatten()
        acc = accuracy_score(y_val, preds)
        print(f"🧪 Trained model — validation accuracy: {acc:.2f}")
        return acc

# --- 6. Reasoning Engine
class ReasoningEngine:
    def __init__(self, kg, system):
        self.kg = kg
        self.system = system

    def reason(self, query):
        doc = nlp(query)
        for ent in doc.ents:
            facts = self.kg.query(s=ent.text)
            if facts:
                return "Known: " + "; ".join(f"{s} {r} {o}" for s, r, o in facts)
        s, r, o = self.extract_subject_relation_object(query)
        if s and r and o:
            prob = self.predict_relation(s, r, o)
            if prob > 0.7:
                return f"Predicted with confidence {prob:.2f}: {s} {r} {o}"
        return "Unknown — please provide feedback to improve me!"

    def extract_subject_relation_object(self, text):
        parts = text.split()
        if len(parts) >= 3:
            return parts[0], parts[1], " ".join(parts[2:])
        return None, None, None

    def predict_relation(self, s, r, o):
        vec = triple_to_vec(s, r, o)
        prob = self.system["model"].predict(vec.reshape(1, -1))[0, 0]
        return prob

# --- 7. Save/Load System State
def save_system(path="system_state.pkl"):
    with open(path, "wb") as f:
        pickle.dump({
            "triples": SYSTEM["kg"].triples,
            "training_data": SYSTEM["learner"].training_data,
            "model_weights": SYSTEM["model"].get_weights(),
        }, f)

def load_system(path="system_state.pkl"):
    if os.path.exists(path):
        with open(path, "rb") as f:
            data = pickle.load(f)
        for s, r, o in data["triples"]:
            SYSTEM["kg"].update(s, r, o)
        SYSTEM["learner"].training_data = data["training_data"]
        SYSTEM["model"].set_weights(data["model_weights"])
        print("✅ System state loaded.")
    else:
        print("⚠️ No saved system state found.")

# --- 8. Main EA system assembly
input_dim = 384
SYSTEM = {
    "kg": KnowledgeGraph(),
    "input_dim": input_dim,
    "model": build_model(input_dim),
}
SYSTEM["ea"] = EvolutionaryAlgorithm(SYSTEM)
SYSTEM["learner"] = LearningModule(SYSTEM["kg"], SYSTEM)
SYSTEM["reasoner"] = ReasoningEngine(SYSTEM["kg"], SYSTEM)

# --- 9. User interaction
def interact(query):
    resp = SYSTEM["reasoner"].reason(query)
    print("🤖:", resp)
    if resp.startswith("Unknown"):
        feedback = input("✅ Please provide correct answer (S R O, pipe-separated): ")
        try:
            s, r, o = feedback.split("|")
            SYSTEM["kg"].update(s.strip(), r.strip(), o.strip())
            SYSTEM["learner"].add_training_example(s, r, o, label=1)
            SYSTEM["ea"].evolve()
        except ValueError:
            print("⚠️ Invalid format. Skipping update.")
    return resp

# --- 10. Command-line interface
def cli():
    print("🤖 Welcome to the Evolving AI System. Type 'quit' to exit.")
    while True:
        q = input("\nAsk a question or type a command ('save', 'load'): ")
        if q.lower() == "quit":
            save_system()
            print("🛑 Goodbye!")
            break
        elif q.lower() == "save":
            save_system()
            print("💾 System saved.")
        elif q.lower() == "load":
            load_system()
        else:
            interact(q)

# --- 11. Main
if __name__ == "__main__":
    load_system()
    cli()
Replace it with the actual path of the extension, or just commenting it out should fix it:
extensions:
- '/path/to/sqlite-digest/digest.so'
Everything used to be done at the command line. "Windows busted out the crib." Out of the box there are rules and conventions the normal people must abide by, but Windows is very customizable. I can remember when the command line was an amazing place where, if you knew what to type, you could make the computer do almost anything. You can still write your own .com file from CMD, or create any file: just type 'copy con testfile.txt', type what you want, press CTRL-Z, and poof, you've made a file. There are so many programs running on a PC now that seem to be doing nothing, but they are all like alligators lying around the pool, waiting for something to step in. Click a link on a web page and you might download, install, and run a base36 string of characters if your antivirus or settings don't protect you. When you power up a PC, you're no longer alone, but subject to the will of anyone in the world who managed to contribute.
Just to fine-tune @kerrek sb's very nice answer above: the equality comparison for it->first should use the collection's key_comp(). It should probably be something more like !m.key_comp()(key, it->first). (Given the other answers' guidance around using upper_bound vs. lower_bound, which comparison needs to be done should be tuned accordingly.)
You are asking 3 questions, which makes answering difficult:
Question 1: Are these two VHDL processes synchronous?
Processes can't be synchronous; they are a construct of VHDL. Only electrical signals can be synchronous.
Question 2: How should I create a slow clock derived from a (possibly much) faster clock without creating a gated clock, and while maintaining 50% duty cycle on slow_clk?
You should work with a clock enable signal. This means all flip-flops should be connected to the same clock, in your case the 12 MHz clock. The flip-flops which must be clocked more slowly should additionally be connected to a clock enable signal, which is active on every 12th edge of the 12 MHz clock (of course, this solution has no signal with a 50% duty cycle).
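A minimal sketch of that pattern (signal names are examples, not from your code):

signal cnt    : integer range 0 to 11 := 0;
signal clk_en : std_logic := '0';

-- Pulse clk_en for one cycle on every 12th edge of the 12 MHz clock.
process(clk)
begin
  if rising_edge(clk) then
    if cnt = 11 then
      cnt    <= 0;
      clk_en <= '1';
    else
      cnt    <= cnt + 1;
      clk_en <= '0';
    end if;
  end if;
end process;

-- The "slow" logic still runs on clk, gated by the enable.
process(clk)
begin
  if rising_edge(clk) then
    if clk_en = '1' then
      -- slow state machine updates here
    end if;
  end if;
end process;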
Question 3: Do I create a new clock domain by using slow_clk_q in the sensitivity list of sm_proc, or are the two processes actually synchronous to clk?
A new clock domain is not created by a sensitivity list; a new clock domain is created by checking the data signal slow_clk_q for a rising edge. At synthesis this connects slow_clk_q to the clock input of a flip-flop, which is not recommended design practice, especially for FPGAs but also for ASICs. Nevertheless, this new clock domain will be synchronous to your 12 MHz domain, but exchanging data signals between these 2 clock domains is difficult, as the edge of your slow clock will not happen at the same time as an edge of your 12 MHz clock, but a short time later.
Your questions concentrate on the term "synchronous". As long as you have only 1 clock source (in your case the 12 MHz clock), all derived signals (for example your slow_clk_q) are synchronous to this clock, because they can only change their value after a rising edge of your single clock source. Only if you have a second independent clock signal (not created by your 12 MHz clock but somewhere else) will you get a second clock domain which is asynchronous; this means that by observing one clock you cannot predict at what time the other clock will have a rising edge.
from datetime import datetime

def YYMMDD0FtoYYYYMMDD(date):
    return datetime.strptime(date[:6], "%y%m%d").strftime("%Y-%m-%d")
Example usage:
print(YYMMDD0FtoYYYYMMDD('1403110F')) # Output: '2014-03-11'
I think all you need to do is make sure the little dropdown at the bottom of the Error List has either of the IntelliSense selections selected, and you're good to go on live error listings.
For now, dividing the original (large image) coords by 3 seems to make it work kind of like that, thanks.
According to https://docs.spring.io/spring-boot/appendix/application-properties/index.html, the name of the property is enabled-protocols, not protocols.
This is confirmed by How to enable multiple ssl protocol versions in Spring Boot?.
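For example, in application.yaml (the protocol list is just an example):
server:
  ssl:
    enabled-protocols: TLSv1.2,TLSv1.3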
I have a similar issue - queries are suspended randomly, sometimes they are fast and sometimes suddenly suspended.
My case is:
- console app1 reading from database.
- console app2 which calls app1 to get and use data received from database.
Observations:
- query is fast when called directly in SSMS;
- query is fast when run app1 separately;
- query is suspended sometimes when app2 calls app1.
I asked GitHub Copilot if VPN can be the reason. Here is the response:
Yes, the VPN can absolutely be a factor—even if your database is local.
Why VPN Can Affect Local Database Performance
• Network Routing: When connected to a VPN, all your network traffic (sometimes including local traffic) may be routed through the VPN tunnel. This can introduce latency, packet loss, or even block certain local network connections, depending on VPN configuration.
• DNS Resolution: VPNs can change your DNS settings, which might affect how "localhost" or your machine's name resolves.
• Firewall/Policy Changes: Some VPNs enforce strict firewall rules or network isolation, which can interfere with local connections.
• Split Tunneling: If your VPN does not use split tunneling, even local traffic may be forced through the VPN, slowing down access to local resources.
Why It Might Seem Random
• VPN routing and policies can change dynamically.
• Some VPN clients intermittently enforce different rules or routes.
• If your database connection string uses a hostname instead of localhost or 127.0.0.1, DNS or routing issues may be more likely.
What To Check
• Connection String: Ensure you are using localhost or 127.0.0.1 for local databases.
• VPN Settings: Check if your VPN client has options for split tunneling or local network access.
• Network Interfaces: When VPN is connected, run ipconfig (Windows) or ifconfig (Linux/Mac) to see if your local network interface is still active and prioritized.
Summary
Even for a local database, a VPN can introduce delays or connection issues. If your queries are fast when not using VPN, it's very likely the VPN was causing the problem.
Recommendation:
If you must use VPN, configure it to allow local network access or use split tunneling. Otherwise, disconnect from VPN when working with local resources for best performance.
Finally enabling split tunneling resolved the issue.
Set the '#step' value to "any".
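Assuming this refers to the step attribute of an HTML number input, e.g.:
<input type="number" step="any">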
July 2025: Actually, you can record audio off your iPhone. I did it with two new apps: dubnote, a recording app which I got for free for a limited time (but you may have to pay now), and Free Vibe, a vibraphone app that plays a simple vibe sound with a virtual vibraphone instrument. Start the dubnote recording, switch over to the instrument app, play it, then switch back to dubnote, stop recording, and it's done. I imported the sample into my DAW (Ableton Live) and processed it. It was really easy. I have several iPhone instruments that I want to record, especially the Chinese Guzheng.
Using BLE scanner apps on my Android phone (like LightBlue and nRF Connect) acting as the central device to scan and connect to my SoC, an Arduino UNO R4 WiFi acting as a peripheral device, its device name gets truncated down to 20 chars.
So I'd use 20 chars as the maximum size.
This might be related to the expected way Auth handles errors when Email Enumeration Protection is enabled.
Additionally, it likely is (and likely should remain) enabled in your project: "If you created your project on or after September 15, 2023, email enumeration protection is enabled by default."
Have a look at https://patrol.leancode.co/documentation/native/feature-parity. It fixed my problem regarding camera permission.
The problem was that I was using an old version of Argo CD, namely version 2.12.2.
The toYamlPretty function requires Helm 3.17, and support for it in Argo CD only arrived in version v3.0.11.
In my case, the solution was to update my Argo CD version.
https://github.com/argoproj/argo-cd/blob/v3.0.11/hack/tool-versions.sh
In my case, the issue was caused by @property being wrapped inside an @layer. Moving the @property declaration to the top level of the stylesheet solved the crash.
Before:
@layer base {
@property --my-color {
syntax: "<color>";
initial-value: #fff;
inherits: true;
}
/* others */
}
After:
@property --my-color {
syntax: "<color>";
initial-value: #fff;
inherits: true;
}
@layer base {
/* others */
}
Also check that the registerBlockType call in the block's index.js has the right category, not only block.json:
category: 'my-block-category'
For Credit Unions, the accountStatus will always be "unknown".
Why not follow the official Secondary Display API Documentation?
I was having a similar issue and I ended up doing it this way: https://stackoverflow.com/a/79696868/6155941
Let me know if it works for you as well.
If it returns -2146233054, convert it to hex, 0x80131522, and then you can look up what it means in the documentation. In this case it is https://learn.microsoft.com/en-us/dotnet/api/system.typeloadexception?view=net-9.0 (TypeLoadException).
You are probably passing a wrong value for:
dotnet_type.c_str()
You can call Windows binaries from within WSL, so the simplest solution that works out of the box is
import os
os.system('echo hello world | clip.exe')
More complicated data will need to use quoting, an f-string and possibly escaping quotes within the data.
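For instance, a sketch that sidesteps shell quoting entirely by writing to clip.exe's stdin with subprocess (plain UTF-8 text assumed):
import subprocess

data = 'complicated "data" with \'quotes\' and\nnewlines'
subprocess.run(['clip.exe'], input=data.encode(), check=True)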
If you press 'i' you will enter INSERT MODE; press ESC to get out of INSERT MODE,
then use ':q' + ENTER to quit.
A chatbot is a software application designed to simulate conversation with human users, typically over the internet. Chatbots can be simple (responding to specific commands or keywords) or advanced (using AI to understand and generate natural language).
Probably better suited for Server Fault, but anyway:
Create a service. If you're using systemd, the suse documentation has a really good writeup on how to create it.
This will both make sure that there is only one instance running per service, as well as run it indefinitely if the service has no exit points.
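A minimal sketch of such a unit, say /etc/systemd/system/myscript.service (the binary path and names are placeholders):

[Unit]
Description=Run my script indefinitely
After=network.target

[Service]
ExecStart=/usr/local/bin/myscript
Restart=always

[Install]
WantedBy=multi-user.target

Then enable it with systemctl daemon-reload followed by systemctl enable --now myscript.service.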
With `react-native-branch` (including v6.7.1), you cannot reliably retry initialization after a failed first attempt within the same app session.
The only robust solution is to ensure the network is available before the first initialization, or to restart the app if initialization fails.
If you need to support retry logic, consider implementing a user prompt to restart the app after a failed Branch initialization, or delay initialization until you confirm the device is online.
I had the same problem in a project which contained a "token" folder. I renamed that folder and it fixed the problem.
Hmm...
Are you using ProGuard in release but not in debug? That, combined with the fact that the input composable is wrapped in an if statement, might mean that Compose "thinks" there's a view there (and reserves a bit of space).
Maybe instead of
if (isItemChecked) {
CustomInputText()
}
try
AnimatedVisibility(visible = isItemChecked) {
CustomInputText()
}
Hope this works!