My suggestion is to use winsound.Beep(). Try:
import winsound
winsound.Beep(100, 100)  # frequency (Hz) and duration (ms): use your own values
The correct region to use for Claude 3 Haiku is RegionEndpoint.USWest2.
Long story short, you need to review the Regions Supported column from the table where Amazon lists all of its Supported foundation models in Amazon Bedrock. Copy/paste the string listed in the Model ID column and use a RegionEndpoint enum that corresponds to one listed in the Regions Supported column.
using AiChatClient.Common;
using AiChatClient.Console;
using Amazon;
using Amazon.BedrockRuntime;
var bedrockService = new BedrockService(
new AmazonBedrockRuntimeClient(AwsCredentials.AccessKeyId, AwsCredentials.SecretAccessKey, RegionEndpoint.USWest2),
"anthropic.claude-3-haiku-20240307-v1:0");
await foreach (var response in bedrockService.GetStreamingResponseAsync("Is this working?", new(), CancellationToken.None))
{
Console.Write(response.Text);
}
Here's a blog post I wrote to help explain more: https://codetraveler.io/2025/03/25/introducing-awssdk-extensions-bedrock-meai-2/
Ok, guys. Here is my MiniZinc model, giving the correct result.
% Solution for Puzzle
%
% Conditions:
% row 1: sum>=38 column A: sequence asc. or desc.
% row 2: two 6, no 1 column B: two 3 but no more odds
% row 3: two 3 column C: no 2
% row 4: two 5, no 4 column D: two 8
% row 5: three 7, no 2 column E: sum = 21
% row 6: two 8, no 3 column F: two 9
include "globals.mzn";
include "alldifferent.mzn";
set of int: digits = 1..9;
array[1..6,1..6] of var digits: matrix;
% sum of row 1 >= 38
constraint (matrix[1,1]+matrix[1,2]+matrix[1,3]+matrix[1,4]+matrix[1,5]+matrix[1,6]>=38);
% sum of column E = 21
constraint (matrix[1,5]+matrix[2,5]+matrix[3,5]+matrix[4,5]+matrix[5,5]+matrix[6,5]=21);
% === Setting for Rows ===
% All different in row 1
constraint alldifferent( [ matrix[1,j] | j in 1..6 ]);
% All different except 2x6 in row 2
constraint all_different_except(matrix[2, 1..6], {6});
constraint count([matrix[2, j] | j in 1..6], 6) = 2;
% No 1 in row 2
constraint forall(j in 1..6) (matrix[2,j] != 1);
% All different except 2x3 in row 3
constraint all_different_except(matrix[3, 1..6], {3});
constraint count([matrix[3, j] | j in 1..6], 3) = 2;
% All different except 2x5 in row 4
constraint all_different_except(matrix[4, 1..6], {5});
constraint count([matrix[4, j] | j in 1..6], 5) = 2;
% No 4 in row 4
constraint forall(j in 1..6) (matrix[4,j] != 4);
% All different except 3x7 in row 5
constraint all_different_except(matrix[5, 1..6], {7});
constraint count([matrix[5, j] | j in 1..6], 7) = 3;
% No 2 in row 5
constraint forall(j in 1..6) (matrix[5,j] != 2);
% All different except 2x8 in row 6
constraint all_different_except(matrix[6, 1..6], {8});
constraint count([matrix[6, j] | j in 1..6], 8) = 2;
% No 3 in row 6
constraint forall(j in 1..6) (matrix[6,j] != 3);
% Neighbors in rows cannot be the same
constraint forall(i in 1..6) (forall(j in 2..6) (matrix[i,j] != matrix[i,j-1]));
% === Setting for Columns ===
% All different in columns 1, 3, and 5
constraint alldifferent( [ matrix[i,1] | i in 1..6 ]);
constraint alldifferent( [ matrix[i,3] | i in 1..6 ]);
constraint alldifferent( [ matrix[i,5] | i in 1..6 ]);
% All different except 2x3 in col B
constraint all_different_except(matrix[1..6, 2], {3});
constraint count([matrix[i, 2] | i in 1..6], 3) = 2;
% Only even digits in column B except for the 2 Threes
constraint forall(i in 1..6 where matrix[i,2]!=3) (matrix[i,2]=2 \/ matrix[i,2]=4 \/ matrix[i,2]=6 \/ matrix[i,2]=8);
% No 2 in column C
constraint forall(i in 1..6) (matrix[i,3] != 2);
% All different except 2x8 in col D
constraint all_different_except(matrix[1..6, 4], {8});
constraint count([matrix[i, 4] | i in 1..6], 8) = 2;
% All different except 2x9 in col F
constraint all_different_except(matrix[1..6, 6], {9});
constraint count([matrix[i, 6] | i in 1..6], 9) = 2;
% Neighbors in cols cannot be the same
constraint forall(i in 2..6) (forall(j in 1..6) (matrix[i,j] != matrix[i-1,j]));
% Column A: asc. or desc. Sequence
constraint (matrix[1,1]=matrix[2,1]+1/\matrix[2,1]=matrix[3,1]+1/\matrix[3,1]=matrix[4,1]+1/\
matrix[4,1]=matrix[5,1]+1/\matrix[5,1]=matrix[6,1]+1) \/
(matrix[1,1]+1=matrix[2,1]/\matrix[2,1]+1=matrix[3,1]/\matrix[3,1]+1=matrix[4,1]/\
matrix[4,1]+1=matrix[5,1]/\matrix[5,1]+1=matrix[6,1]);
% All Digits must occur 4x in the matrix
constraint count([matrix[i,j] | i,j in 1..6], 1) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 2) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 3) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 4) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 5) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 6) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 7) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 8) = 4;
constraint count([matrix[i,j] | i,j in 1..6], 9) = 4;
solve satisfy;
output ["S O L U T I O N \n",
"=============== \n",
"R/C\t A\t B\t C\t D\t E\t F \n",
"-------------------------------------------------------\n",
"1\t \(matrix[1,1])\t \(matrix[1,2])\t \(matrix[1,3])\t \(matrix[1,4])\t \(matrix[1,5])\t \(matrix[1,6]) \n",
"2\t \(matrix[2,1])\t \(matrix[2,2])\t \(matrix[2,3])\t \(matrix[2,4])\t \(matrix[2,5])\t \(matrix[2,6]) \n",
"3\t \(matrix[3,1])\t \(matrix[3,2])\t \(matrix[3,3])\t \(matrix[3,4])\t \(matrix[3,5])\t \(matrix[3,6]) \n",
"4\t \(matrix[4,1])\t \(matrix[4,2])\t \(matrix[4,3])\t \(matrix[4,4])\t \(matrix[4,5])\t \(matrix[4,6]) \n",
"5\t \(matrix[5,1])\t \(matrix[5,2])\t \(matrix[5,3])\t \(matrix[5,4])\t \(matrix[5,5])\t \(matrix[5,6]) \n",
"6\t \(matrix[6,1])\t \(matrix[6,2])\t \(matrix[6,3])\t \(matrix[6,4])\t \(matrix[6,5])\t \(matrix[6,6]) \n" ];
I am sure that several code parts could be coded more elegantly. But I am happy to have constructed my first solution in MiniZinc as of today.
Thanks for all your input!
The correct identifier to use for Claude 3 Haiku is anthropic.claude-3-haiku-20240307-v1:0.
Long story short, you need to copy/paste the Model ID from the table where Amazon lists all of its Supported foundation models in Amazon Bedrock. Copy/paste the string listed in the Model ID column and use a RegionEndpoint enum that corresponds to one listed in the Regions Supported column.
var bedrockService = new BedrockService(
new AmazonBedrockRuntimeClient("My AWS Credentials Access ID","My AWS Credentials Secret Access Key", RegionEndpoint.USEast1),
"anthropic.claude-3-haiku-20240307-v1:0");
await foreach (var response in bedrockService.GetStreamingResponseAsync("Is this working?", new(), CancellationToken.None))
{
Console.Write(response.Text);
}
Here's a blog post I wrote to help explain more: https://codetraveler.io/2025/03/25/introducing-awssdk-extensions-bedrock-meai-2/
Refer to the solution described in this post for a modern way to display ls output.
Can the Unix list command 'ls' output numerical chmod permissions?
UPDATE APRIL 2025
Install the tool EZA.
sudo yum install eza -y
[Installation on other platforms.]
Use:
eza -alhgo --total-size
Or edit your .bashrc or .bash_profile (in your home folder) and create an alias:
vi ~/.bash_profile
alias ls='eza -alhgo --total-size'
or
echo "alias ls='eza -alhgo --total-size'" >> ~/.bash_profile
Then reload the .bash_profile with:
source ~/.bash_profile
Call a single file with:
ls file.txt
This will display a list with headers, groups, total file and folder sizes, and octal permissions.
When using the public folder you don't need to use a relative path (../..). Try using an absolute path like this: /SlideShow60.png.
This question has been answered here by `chrisroberts`, and the answer is reproduced below:
"
Hi there,
This box is externally hosted, and it appears that the account where the box file is/was stored is not currently available. The result from fetching the box file returns the following:
curl https://storage.googleapis.com/arrikto/vagrant/boxes/minikf/20210428.0.1-l1-release-develop/virtualbox/minikf.box
<?xml version='1.0' encoding='UTF-8'?><Error><Code>UserProjectAccountProblem</Code><Message>The project to be billed is associated with a delinquent billing account.</Message><Details>The billing account for the owning project is disabled in state delinquent</Details></Error>
Due to this, it appears that this box will be unavailable.
"
Thank you, it worked! You're amazing!
Would this work?
REGEX PATTERN (PCRE2 flavor):
"([^"]*)"|'([^']*)'
Regex demo: https://regex101.com/r/FX3WX4/1
TEST STRING:
This "searchword1". searchword. "search 'word' 2"
This "searchword3". searchword. "search 'word' 4"
MATCH / GROUP
MATCH 1 5-18 "searchword1"
GROUP 1 6-17 searchword1
MATCH 2 33-50 "search 'word' 2"
GROUP 1 34-49 search 'word' 2
MATCH 3 56-69 "searchword3"
GROUP 1 57-68 searchword3
MATCH 4 84-101 "search 'word' 4"
GROUP 1 85-100 search 'word' 4
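If it helps, here is a minimal Node/TypeScript sketch of the same pattern (the test string is the first line from above; group 1 is filled for double-quoted matches, group 2 for single-quoted ones):
const pattern = /"([^"]*)"|'([^']*)'/g;
const text = `This "searchword1". searchword. "search 'word' 2"`;
for (const m of text.matchAll(pattern)) {
  // m[1] holds the double-quoted capture, m[2] the single-quoted one
  console.log(m[0], "->", m[1] ?? m[2]);
}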
You could try tricks like separating odd and even numbers or allowing a set number of tries (e.g. use only 5 pluses, or at least more than 3 pluses, reducing the overall loop). You could also break the loop as soon as you find a combination, or sort the list first, but yeah, it's going to be complicated.
Instructions are out there. Initially, I did everything but follow the instructions. I'm now humbled.
https://www.jenkins.io/doc/book/installing/docker/#on-macos-and-linux
tl;dr:
install docker on host machine
create bridge network
run docker:dind image
create custom jenkins image
run custom jenkins image
This preprocess function works, in case it helps others:
YOURTHEME.theme:
function YOURTHEME_preprocess_paragraph(&$variables) {
  // The current node is only present on node routes, so guard against NULL.
  $node = \Drupal::request()->attributes->get('node');
  if ($node) {
    $variables['content_type'] = $node->getType();
  }
}
paragraph.html.twig:
{{ content_type }}
Inspired by https://createdbycocoon.com/knowledge/get-node-values-paragraph-templates-twig-drupal-8
I was stuck with this problem too, and your work inspired me a lot.
Eventually, I simply use
for i in range(len(s)):
    if s[i].isdigit():
        # once a digit appears, everything from there on must be a digit
        if not s[i:].isdigit():
            return False
        break
# s[i+1:].isalpha()  # note: this expression's result was unused
It will automatically return False if there is only one non-0 digit (i.e. s[i]) at the end of the plate.
libcurl4-nss-dev doesn't provide libcurl-nss.so.4. You need to install libcurl3-nss.
map.panToBounds is likely what you're looking for. google docs here
So the day after, it just worked with input.branch-to-build.
Finally I implemented it myself.
Since AWX 4, the awx_project_scm_branch and awx_job_scm_branch variables are available in playbook environments.
The related documentation can be found at https://docs.ansible.com/automation-controller/4.4/html/userguide/job_templates.html
I'm not sure this is possible.
Perhaps declare layouts for both orientations and conditionally render for the current orientation? Alternatively, bind the columns/rows to the orientation?
It is as easy as running the Vite command at the same time as the Rails server.
rails s
Open a 2nd terminal
npx vite
And voilà! Vite runs super fast and hot reloads on any file change!
In the Node.js v4 programming model, Azure Functions allows you to configure your Node.js project's main entry point, and thus you can configure multiple folders.
For a folder structure like this:
FunctionsProject
| - functions
| | - function1.ts
| | - function2.ts
| - folder1
| | - folder1function1.ts
| | - folder1function2.ts
| - folder2
| | - folder2function1.ts
| | - folder2function2.ts
| - node_modules
| - dist
| - host.json
| - package.json
Your package.json file contents:
{
...
"main": "dist/{index.js,functions/*.js,folder1/*.js,folder2/*.js}",
...
}
See the documentation here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-node?tabs=typescript%2Cwindows%2Cazure-cli&pivots=nodejs-model-v4#registering-a-function
We also just started seeing this today running C# Azure Function Apps in .NET 6. Are you using NServiceBus also?
I hope I understand the task correctly.
The test phrase should be placed in double square brackets.
If you start from longer matches, proceed serially to shorter matches, and use a regex "Negative Lookbehind" for the double square brackets, bulk replace should work:
(?<!\[\[)
It will only replace a phrase without double square brackets in front of it. For higher accuracy, it can also be tested further.
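As a rough illustration in Node/TypeScript (the phrase "foo" and the replacement "bar" are made-up stand-ins for your search and replace terms):
const text = "foo should change, but [[foo]] should not";
// replace the phrase only when it is not directly preceded by [[
console.log(text.replace(/(?<!\[\[)foo/g, "bar"));
// -> "bar should change, but [[foo]] should not"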
Please let me know if I have understood your problem correctly and if my suggestion helps.
Sure wish I could find a way to set the default date without being a coder.
And I wish default dates included yyyy.mm.dd, the format I've used for decades.
First, thanks. When setting up a TypeScript project manually, running npx tsc --init is a convenient way to generate a tsconfig.json file. Good job!
For the solution, please look at the end of the question. It's added there.
Did you try:
"suppressApplicationTitle": true,
in your profile settings?
You can also right click on the table > Generate SQL > DDL.
Try conda install -c conda-forge shap
If the scope is correct then check the naming as it is case sensitive.
Just use
solver.EnableOutput()
That will enable the solver's verbose output.
It looks like as of git version 2.39.5*, you can just do this:
git branch --show-current
Assuming someone has a relatively recent version of git, am I missing a reason not to just do that? It seems a lot simpler than some of the other answers, though I certainly appreciate the creativity that's gone into some of them.
*Probably some earlier versions as well. I've just been too lazy to check.
In React, when state updates, the component re-renders.
Since the function component runs again from the top, all const variables inside it get reinitialised.
const means you can’t reassign the variable within the same render, but React doesn’t mutate it directly.
Instead, React remembers state between renders and assigns the new value when the component re-renders.
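A minimal sketch of that idea (a hypothetical Counter component, TypeScript/React):
import { useState } from "react";

function Counter() {
  // `count` is a brand-new const on every render; React hands it
  // the remembered state value, so the const itself is never mutated.
  const [count, setCount] = useState(0);
  // setCount does not reassign `count`; it schedules a re-render,
  // and the next render's `count` is initialised with the new value.
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}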
How did you solve this? Please explain. I am also facing the same issue with Tomcat 9.
If the date shows Invalid Date, it is not due to Dexie but due to the format the Date constructor in JavaScript expects. The example format you provided ('1995-12-17T03:24:00') works perfectly well in the Date constructor and is the same format as you had in the original question and the answer. I cannot see how this would have anything to do with Dexie; it only stores the date as-is. Whether it's a real or an invalid Date object, it will be stored the same as it was constructed.
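For example, in Node (plain JavaScript/TypeScript, nothing Dexie-specific):
console.log(new Date("1995-12-17T03:24:00").toString()); // a real Date
console.log(new Date("not a date").toString());          // "Invalid Date"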
This can get you started on how they identify issue keys in different areas of BB.
https://github.com/atlassian/atlascode/blob/main/src/bitbucket/issueKeysExtractor.ts
Avoid using take to get the header: take can be inefficient, especially on large RDDs. Instead, you can use first() to get the first line and then apply the cleaning.
More efficient filtering: instead of using zipWithIndex and filter, you can use filter directly to remove the lines that are not valid, which can be more efficient.
Use persist or cache: if you are performing several operations on the same RDD, consider using persist() or cache() to keep the RDD in memory and avoid re-evaluation.
Row count: to count rows more efficiently, you can use count() directly on the RDD before converting it to a DataFrame.
Here is an optimized version of your function:
import re

def clean_corrupted_data(input_path):
    # Read in input as text file
    rdd = spark.sparkContext.textFile(input_path)

    # Define cleaning function
    def remove_non_ascii(text):
        return re.sub(r'[\x00-\x1F\x7F-\x9F]|[^\x00-\x7F]+', '', text)

    # Clean text
    clean_rdd = rdd.map(remove_non_ascii)

    # Get the header from the first valid line
    header_line = clean_rdd.first()
    header = header_line.split(";")

    # Remove the header line and clean the data
    data_rdd = clean_rdd.filter(lambda line: line != header_line) \
                        .map(lambda line: [col.strip('"') for col in line.split(";")])

    # Keep only rows with the right number of columns
    valid_data_rdd = data_rdd.filter(lambda row: len(row) == len(header))

    # Create DataFrame from cleaned text + headers
    df = spark.createDataFrame(valid_data_rdd, header)

    # Optionally cache the DataFrame if you plan to perform multiple actions
    df.cache()
    return df
Check data integrity: after cleaning, it is important to verify that the data is correct and that no relevant information was lost.
Test with a subset: if the file is very large, consider testing the function on a subset of the data to make sure the cleaning logic works correctly before applying it to the full dataset.
Performance monitoring: use Spark's monitoring tools to identify performance bottlenecks and adjust the cluster configuration if necessary.
Try downloading Microsoft Visual C++ Redistributable. I had the same issue and this fixed it for me. Get it here: https://aka.ms/vs/17/release/vc_redist.x64.exe
Wrong example: the year in the date string should contain 4 characters (YYYY). You wrote 019 instead of 2019.
It seems like this issue: https://istio.io/latest/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
Depending on your global Istio mesh settings, try one of these in your DestinationRule:
trafficPolicy:
  tls:
    mode: ISTIO_MUTUAL
or
trafficPolicy:
  tls:
    mode: SIMPLE
The same error message appears when Pageant could not load the key file (e.g. because of a network issue when the .ppk file is located on a network share).
Re-adding the key(s) instantly solves the problem.
It can first be converted into a matrix; then calling avg will compute the average of each column of the matrix. Finally, group and aggregate by stock code, combined with toArray, to get the desired result.
select toArray(avg(matrix(BidPrice))) from t where date(DateTime) = 2023.11.01 group by SecurityID
Try using can''t instead of can't. This is a common problem, where ' must be written as ''.
See https://github.com/vitejs/vite/issues/17882. Use triple dash on Windows.
npm create vite@latest my-react-js-app --- --template react -y
I found my answer. In the command prompt, docplex needed to be told directly where the installation was as follows:
>docplex config --upgrade <cplex_studio_location>
When you followed
https://mertbakir.gitlab.io/operations-research/how-to-install-cplex-ibm-academic-initiative/
Are you sure you picked the right product and not the Community Edition?
The same issue and resolution also apply to Microsoft Project Client.
I'm going to upload the presentation to OneDrive so you can download it easily. One moment, please.
assistant:
```python
import onedrivesdk
from onedrivesdk.helpers import GetAuthCodeServer
# Replace these with your OneDrive app credentials
client_id = 'YOUR_CLIENT_ID'
client_secret = 'YOUR_CLIENT_SECRET'
redirect_uri = 'http://localhost:8080/'
# Authenticate and create the OneDrive client
client = onedrivesdk.get_default_client(client_id=client_id, scopes=['wl.signin', 'wl.offline_access', 'onedrive.readwrite'])
auth_url = client.auth_provider.get_auth_url(redirect_uri)
code = GetAuthCodeServer.get_auth_code(auth_url, redirect_uri)
client.auth_provider.authenticate(code, redirect_uri, client_secret)
# Upload the file to OneDrive
item = client.item(drive='me', id='root').children['presentation.pptx'].upload('presentation.pptx')
# Get the shareable link
shared_link = client.item(drive='me', id=item.id).create_link('view').link.web_url
print(f"La presentación se ha subido a OneDrive. Puedes descargarla usando el siguiente enlace: {shared_link}")
```
assistant:
It looks like there was a problem trying to upload the file to OneDrive due to a missing module. Let's try another solution.
I'll guide you through the steps to upload the file to OneDrive manually:
1. **Sign in to OneDrive**: Open OneDrive and sign in with your Microsoft account.
2. **Upload the file**:
Click the "Upload" button at the top of the page.
Select "Files" and then choose the `presentation.pptx` file from your computer.
3. **Get the shared link**:
Once the file has been uploaded, right-click on it and select "Share".
Click "Copy link" to get the shared link.
If you'd prefer that I try another automated solution, or you need more help, please let me know.
I struggled with this issue all day, and none of the suggested solutions worked for me. However, I finally found a workaround—disabling the NVIDIA Graphics Adapter in the Windows Device Manager. After doing this, the Android Emulator started working properly!
Had the same issue. It did not work until I set up "Trust relationships" for the role I created.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"textract.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
Thank you! And yes, I did some more digging, aka lots of reading to see why I am a fool. It turns out my model wasn't being called properly, or wasn't persistent in my code. By fixing that, the thread error stopped occurring; I guess during the background task it attempted to call it improperly, causing the thread error. In light of fixing that, I should go in and remove those thread.sleep lines, as they are definitely a poor attempt to resolve the problem. Also, the location manager behavior is unusual for those using it for the first time and testing it: your app won't receive an update unless your user moves. Don't expect periodic updates to come through.
The Apple design documentation says that for theme-based app icons:
1. "You provide a transparent dark icon."
2. "The system provides the gradient background."
3. "The system composites your dark icon on the gradient background."
So how are we supposed to provide a transparent icon if the build system throws the error
"The large app icon in the asset catalog in “Runner.app” can’t be transparent or contain an alpha channel"?
This just doesn't make any sense to me.
Providing a white or dark background doesn't fit the described method of making iOS handle this use case for us. Odd!
Reference: https://developer.apple.com/design/human-interface-guidelines/app-icons#Platform-considerations
What ended up solving this issue for me was uninstalling the DevOps plugin. I was connected to my remote repo by url and wasn't using the plugin, but it was somehow overriding my authentication only when pushing from the git toolbar. After uninstalling the plugin I was no longer met with the MFA request and my push went through as expected. Not sure why this issue began just a few days ago, but that's what solved it for me.
You shouldn't include the following, since they are designed for JAXB 2.3 and not JAXB 4:
implementation 'org.jvnet.jaxb:jaxb2-basics:2.0.14'
implementation 'org.jvnet.jaxb:jaxb2-basics-runtime:2.0.14'
The class org.jvnet.jaxb.lang.Equals is in the org.jvnet.jaxb:jaxb-plugins-runtime module. Could you try again without the modules above?
You can raise the upload limit to 512M.
Create a file named .user.ini in your public_html/ folder.
Add these lines:
upload_max_filesize = 512M
post_max_size = 512M
max_execution_time = 600
memory_limit = 512M
Save and test your upload again.
SageMaker does not support models from before TensorFlow 2, and since this model predates TensorFlow 2, I was unable to deploy it.
I would create a calculated field for the data that you would like in the tooltip and use round(data, 0); that will get rid of the decimal places, and the axis should be formatted correctly.
See https://github.com/c5ms/modern-ddd-cargotracker
This repository demonstrates a contemporary DDD implementation architecture suitable for complex business systems, built around the following core organizational principles and technical implementations:
Domain Layer
Application Layer
Infrastructure Layer
CQRS Architecture
Strategy Pattern
Event-Driven Architecture
Thanks for your feedback, sir.
I have been trying to fix an old codebase which still used Javalin 4.x.x and had to be updated to 6.x.x, yet it still used some outdated API, as below, so it didn't work.
config.plugins.enableDevLogging(); // Approach 1
I am new to coding; I wonder how you keep up with the Javalin docs? I had been searching but wasn't able to solve the problem until I came across your posts.
Look at this guy's website. He is a friggin ABAP genius.
"https://zevolving.com/2015/07/salv-table-22-get-data-directly-after-submit/"
The problem is that inside GDB, the keyword list is reserved. Specifically:
list linenum
Print lines centered around line number linenum in the current source file.
So, the quickest solution to read the value of that variable is to rename it. For example, if you rename the variable list -> alist, then printing alist will work. For the purposes of debugging the application this should suffice.
I think this could satisfy your needs
https://github.com/nodex-labs/create https://www.npmjs.com/package/@nodex-labs/create
The package was just founded and is actively gaining new features and templates for Node.js app generation.
I have the same problem with one site I am building. To try to fix the problem I have put a logo.jpeg in the same folder and another picture, with a different name, in the images folder. Neither of them shows up. But in index.html (same folder) both pictures I created show without any problem. I have made lots of .html and .php webpages, but I never had this problem before; it's just recent.
The path to the .php file is like this: https://localhost/mapname/index.php. Is this just a local problem?
I believe it is indeed a weak network issue. For me, just connecting the Ethernet cable was enough to pull the docker image.
I have this problem with CAS 7.1, but only with some installations.
Specifically, with Tomcat 10.1 and the Microsoft JDK 21.0.5.11-hotspot JRE, the problem doesn't arise.
Conversely, with Tomcat 10.1 and OpenJDK 21 the problem is there.
As already said, it can be solved either by:
setting cas.webflow.crypto.enabled=false
or setting cas.webflow.crypto.encryption.key-size=128 (please note key-size instead of keySize, and 128 instead of 32)
The Graph Path Search extension is the optimal solution for your scenario. It’s specifically engineered to handle cyclic graphs and symmetric relationships like your exchangesFluidWith property by allowing you to set path-length constraints and scoring mechanisms. This makes it possible to compute the shortest or most relevant path, even when dealing with complex or cyclic network data.
For example, if you need to find the nearest MechanicalDamper for an AirTerminal, the Graph Path Search extension efficiently calculates paths while avoiding the infinite results that occur with standard SPARQL property paths.
For further details, configuration options and examples, please consult the official GraphDB Graph Path Search documentation: GraphDB Graph Path Search
Hope I have been helpful. If you have any further questions, please feel free to reach out to GraphDB support at [email protected].
Looks like your uncaughtException handler isn't catching the error because nodemon restarts the server before the handler runs.
Try running your server with plain node: node server.js
If this works, then nodemon is the issue. You can prevent it from restarting on uncaught exceptions by adding this to nodemon.json:
{
  "restartable": "rs",
  "ignore": ["node_modules/**"],
  "env": { "NODE_ENV": "development" },
  "ext": "js,json"
}
Another thing to check is whether console.log(x) runs before process.on('uncaughtException') is set up. If app.js is imported before the handler, the error might occur before it's caught.
Try putting console.log(x) inside server.js after the handler and see if it works.
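A minimal sketch of that ordering (server.js and app.js are the file names from the question; require is used so the handler really is registered first):
// server.js
process.on("uncaughtException", (err) => {
  // registered BEFORE app.js runs, so it catches the ReferenceError
  console.log("UNCAUGHT EXCEPTION:", err.message);
  process.exit(1);
});
require("./app"); // app.js contains the failing console.log(x)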
I have a partial solution for boards.
When I created a GitLab board that should track all issues with label X and also either label A1 or label A2, I could get this filtering to work like so:
In the issue board's search bar I added the filter for label X;
then I clicked on 'Configure board' -> Labels (Any label) and added labels A1 and A2 there.
The resulting board displayed all issues with label X and either label A1 or label A2 attached.
Many years later I have the same problem with Windows 11. I was not able to ping my own, local IP address within my home network. It took me a while, but I found out that it was caused by 'Avast Premium Security'. In Avast the Firewall was enabled, and the Wi-Fi network I was connected to (my home Wi-Fi) was set to "I don't trust this network". When I changed it to "I trust this network" I was able to ping my own address again.
According to the Qt documentation, they focus on their slot-signal mechanism being type safe:
The signature of a signal must match the signature of the receiving slot.
Making a signal's signature dependent on a specific enum will not be versatile enough in some situations, especially when taking into account the everlasting problem of backward compatibility.
Rollout is now Dec '25 as per https://www.microsoft.com/en-us/microsoft-365/roadmap?filters=&searchterms=88537, so perhaps will never happen.
Does anybody have any good alternatives that they use?
As of 2025, on Rails 7.2.2.1, this variant works for me:
another_ip = IPAddr.new('127.1.2.3').to_s
@request.headers['REMOTE_ADDR'] = another_ip
get '/admin/login'
# if you are debugging in your code, then
=> #<ActionController::TestRequest GET "http://admin/auth" for 127.1.2.3>
After modifying the following, I was able to run in cluster deploy mode on my standalone cluster. The job that finishes without errors in client deploy mode is now failing with a typical distribution issue (serialization of a UDF).
Environment settings -> deploy mode cluster
removed SPARK_LOCAL_IP from the spark-env.sh
removed spark.driver.host from spark-default.conf
spark.driver.bindAddress=<hostname of the node on which the spark-default.conf file is modified>
I have three nodes and set each of their files to their own host name.
(note: removed spark.executor.host as it doesn't even exist :-S)
I had a problem too: everything works, but my app has micart added as its baseHref. When assets look for ../../resources, for example, the image gets looked up at domain/resources... instead of domain/micart/resources.
Why is that?
You need to wrap the setData function inside the on("tableBuilt") callback to ensure that the table is fully initialized before loading the data.
// Trigger AJAX data load on table built event
table.on("tableBuilt", async () => {
table.setData("/api/...");
});
I had registered my app in the settings under INSTALLED_APPS already, and that's why it wasn't working. When I removed it from the settings, it worked.
I have always used the "modern" comparison operators like >= or <= instead of the "archaic" ones like GE or LE, just because I like them better.
However, my manager maintained that multiple studies have shown our human brains make fewer mistakes when using the "archaic" ones, e.g. GE, LE, etc.
I do not care one way or the other; however, IMO it should be consistent within the program. IOW, if I have to maintain a program I use what was already used, unless it is easy to change the GE to >= etc. throughout the program. If it is already mixed up and not easily changeable, I use my way.
To me, one style is neither archaic nor modern, just different. E.g., IBM's VM/Rexx (very nice) language has been using =>, <> etc. since the early 1980s, so does that make it "modern"?
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors

# initialize
X = [[1, 1], [1, 2], [2, 1], [2, 2], [3, 1], [3, 2], [1, 3], [2, 3], [3, 3], [4, 1], [4, 2], [4, 3]]
t = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
v1 = [.1, -.1, .1]
v2 = [.1, .1, -.1]
w = [-.1, .1, .1]
lr = 0.0075
alpha = 0.0
for _ in range(1000):
    total_error = 0
    for i in range(len(X)):
        x = X[i]
        #print(classify(x, v1, v2, w))
        # feedforward
        z1 = np.tanh(v1[0]+np.dot(x, v1[1:]))
        z2 = np.tanh(v2[0]+np.dot(x, v2[1:]))
        y = 1./(1+np.exp(-w[0]-np.dot(w[1:], [z1, z2])))
        # backward
        y_error = (1-t[i])*y - t[i]*(1-y)
        total_error -= (1-t[i])*np.log(1-y) + t[i]*np.log(y)
        #print(y_error)
        #print()
        z1_error = y_error*w[1]*(1-z1)*(1+z1)
        z2_error = y_error*w[2]*(1-z2)*(1+z2)
        # update weights
        w[0] = w[0] - lr*y_error*1 - lr*alpha*w[0]
        w[1] = w[1] - lr*y_error*z1 - lr*alpha*w[1]
        w[2] = w[2] - lr*y_error*z2 - lr*alpha*w[2]
        v1[0] = v1[0] - lr*z1_error*1 - lr*alpha*v1[0]
        v1[1] = v1[1] - lr*z1_error*x[0] - lr*alpha*v1[1]
        v1[2] = v1[2] - lr*z1_error*x[1] - lr*alpha*v1[2]
        v2[0] = v2[0] - lr*z2_error*1 - lr*alpha*v2[0]
        v2[1] = v2[1] - lr*z2_error*x[0] - lr*alpha*v2[1]
        v2[2] = v2[2] - lr*z2_error*x[1] - lr*alpha*v2[2]
    print("TE: \t%0.3f" %total_error)
for x in X:
    print(classify(x, v1, v2, w))
nx = ny = 200
x_min = y_min = 0
x_max = y_max = 5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx), np.linspace(y_min, y_max, ny))
D = np.c_[xx.ravel(), yy.ravel()]
p = np.array([classify(d, v1, v2, w) for d in D])
p = p.reshape(xx.shape)
for i in range(len(X)):
    if t[i] == 1:
        plt.plot(X[i][0], X[i][1], 'o', color='blue', markersize=20)
    else:
        plt.plot(X[i][0], X[i][1], 'o', color='red', markersize=20)
plt.pcolormesh(xx, yy, p, cmap='red_blue_classes',
               norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, p, [0.5], linewidths=2., colors='k')
nx = ny = 200
x_min = y_min = 0
x_max = y_max = 5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx), np.linspace(y_min, y_max, ny))
D = np.c_[xx.ravel(), yy.ravel()]
Dp = []
t = []
for d in D:
    if d[0] > 2 and d[0] < 3 and d[1] > 2 and d[1] < 3:
        Dp.append(d)
        t.append(1)
    elif (d[0] < 1 or d[0] > 4) or (d[1] < 1 or d[1] > 4):
        if np.random.rand() < 0.05:
            Dp.append(d)
            t.append(0)
np.mean(t)
X = Dp
for i in range(len(X)):
    if t[i] == 1:
        plt.plot(X[i][0], X[i][1], 'o', color='blue', markersize=3)
    else:
        plt.plot(X[i][0], X[i][1], 'o', color='red', markersize=3)
num_hidden = 4
def classify2(x, vs, w):
    nh = len(vs)
    zs = []
    for i in range(nh):
        v = vs[i]
        zs.append(np.tanh(v[0] + np.dot(v[1:], x)))
    y = 1./(1.+np.exp(-w[0]-np.dot(w[1:], zs)))
    return y
X = Dp
vs = []
w = []
w.append(np.random.rand()*2-1)
for _ in range(num_hidden):
    v = []
    for _ in range(3):
        v.append(np.random.rand()*2-1)
    vs.append(v)
    w.append(np.random.rand()*2-1)
lr = 0.005
alpha = 0.0
try:
    for _ in range(100):
        total_error = 0
        for i in range(len(X[:])):
            x = X[i]
            #print(classify(x, v1, v2, w))
            # feedforward
            zs = []
            for j in range(num_hidden):
                v = vs[j]
                zs.append(np.tanh(v[0] + np.dot(x, v[1:])))
            #print()
            #print(i)
            #print(zs)
            s = -w[0]-np.dot(w[1:], zs)
            y = 1./(1+np.exp(s))
            y = np.clip(y, 1e-10, 1-1e-10)
            #print(y)
            # backward
            y_error = (1-t[i])*y - t[i]*(1-y)
            total_error -= (1-t[i])*np.log(1-y) + t[i]*np.log(y)
            #print(y_error)
            z_errors = []
            for j in range(num_hidden):
                z_errors.append(y_error*w[j+1]*(1-zs[j])*(1+zs[j]))
            #print(z_errors)
            # update weights
            w[0] = w[0] - lr*y_error*1 - lr*alpha*w[0]
            for j in range(num_hidden):
                w[j+1] = w[j+1] - lr*y_error*zs[j] - lr*alpha*w[j+1]
                v = vs[j]
                v[0] = v[0] - lr*z_errors[j]*1 - lr*alpha*v[0]
                v[1] = v[1] - lr*z_errors[j]*x[0] - lr*alpha*v[1]
                v[2] = v[2] - lr*z_errors[j]*x[1] - lr*alpha*v[2]
except RuntimeWarning:
    print(i)
    print(y_error)
    print(zs)
print("TE: \t%0.3f" %total_error)
nx = ny = 50
x_min = y_min = 0
x_max = y_max = 5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx), np.linspace(y_min, y_max, ny))
D = np.c_[xx.ravel(), yy.ravel()]
p = [classify2(d, vs, w) for d in D]
p = np.array(p)
p = p.reshape(xx.shape)
X = Dp
for i in range(len(X)):
    if t[i] == 1:
        plt.plot(X[i][0], X[i][1], 'o', color='blue', markersize=8)
    else:
        plt.plot(X[i][0], X[i][1], 'o', color='red', markersize=8)
plt.pcolormesh(xx, yy, p, cmap='red_blue_classes',
               norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, p, [0.5], linewidths=2., colors='k')
classify2([2.5, 2.5], vs, w)
classify2([0, 0], vs, w)
classify2([5, 5], vs, w)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vs = [[-1.4395892690558445, -0.6994640575516157, 2.4763238083720416],
[-5.832152599201117, 1.2801824017482004, 0.5347291187258975],
[1.0461615987520945, -3.683980121271858, 2.024501795098323],
[0.35189345674770495, 1.577772129315875, 1.1894009103278471]]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
w = [-5.266158175713795,
4.933742705326487,
-5.537202130283304,
-5.736361194605192,
-4.393480175813042]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#OR
plt.plot([-1], [-1], 'o', color='red', markersize=20)
plt.plot([-1], [1], 'o', color='blue', markersize=20)
plt.plot([1], [-1], 'o', color='blue', markersize=20)
plt.plot([1], [1], 'o', color='blue', markersize=20)
w=[1, 1, 1]
p=np.array([perceptron(w, d) for d in D])
p=p.reshape(xx.shape)
plt.pcolormesh(xx, yy, p, cmap='red_blue_classes',
norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, p, [0.5], linewidths=2., colors='k')
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# XOR
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = [1, -1, -1, 1]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# AND
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = [-1, -1, -1, 1]
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def solve_for_w(X, y):
    ni, nf = X.shape
    w = np.zeros(nf+1)
    max_iter = 2
    lr = 1
    for it in range(max_iter):
        print("\nIteration", it)
        print("Weights", w)
        for obj in range(ni):
            print("Object %s, label: %d" %(X[obj], y[obj]))
            p = perceptron(w, X[obj])
            print("Prediction", p)
            if p == y[obj]:
                print("Prediction is correct")
            else:
                print("*Prediction is incorrect*")
                delta_weights = np.append(lr*y[obj], lr*X[obj]*y[obj])
                print("Delta", delta_weights)
                w += delta_weights
                print("New weights", w)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
r_noisy = r_tr+rg.normal(loc=11, scale=11, size=len(r_tr))
pf = PolynomialFeatures(degree=11)
X_tr=pf.fit_transform(D_tr)
A=construct_A(X_tr)
b=construct_b(X_tr, r_noisy)
w=np.squeeze(np.asarray(A.I*b.T))
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def f(x):
    return 1 + 2*x
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def likelihood(counts, probs):
    return np.prod([np.power(probs[i], counts[i]) for i in range(len(counts))])
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def kernel_matrix(X, kf):
    ns, nf = X.shape
    km = np.zeros((ns, ns))
    for i in range(ns):
        for j in range(ns):
            km[i, j] = kf(X[i], X[j])
    return km
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def tanh_kernel(x1, x2):
    return np.tanh(2*np.dot(x1, x2)+1)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def radial_basis_kernel(x1, x2, s):
    return np.exp(-1*(np.dot(x1-x2, x1-x2)/(2*(s**2))))
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def poly_kernel(x1, x2, d=2):
    return (np.dot(x1, x2)+1)**d
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def linear_kernel(x1, x2):
    return np.dot(x1, x2)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def compute_b(X, y, alphas, kf):
    sv_inds = np.arange(len(y))[alphas > 1e-5]
    b = 0
    for i in sv_inds:
        b += (y[i] - classify(X, y, alphas, 0, kf, X[i]))
    return b / len(sv_inds)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def construct_A(X):
    A = []
    for i in range(X.shape[1]):
        row = []
        for j in range(X.shape[1]):
            s = 0
            for d in X:
                s += d[i]*d[j]
            row.append(s)
        A.append(row)
    return np.matrix(A)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def classify(x, v1, v2, w):
    z1 = np.tanh(v1[0]+np.dot(x, v1[1:]))
    #print(z1)
    z2 = np.tanh(v2[0]+np.dot(x, v2[1:]))
    #print(z2)
    y = 1./(1+np.exp(-w[0]-np.dot(w[1:], [z1, z2])))
    return y
How do I merge two Azure SQL data sources?
As I mentioned in the comments, I tried this in my environment by using a SQL view, and it merged successfully, as you can see in the output below.
View in SQL:
CREATE VIEW [dbo].[vw_ASProduct] AS
SELECT
mp.Id AS ProductId,
mp.VersionName,
mp.Published,
(SELECT * FROM dbo.vw_ProductMarketData
WHERE dbo.vw_ProductMarketData.MarketId = mp.MarketId
AND dbo.vw_ProductMarketData.VersionId = mp.VersionId
FOR JSON AUTO) AS Texts,
(SELECT * FROM dbo.vw_ProductMarketDataImage
WHERE dbo.vw_ProductMarketDataImage.MarketId = mp.MarketId
AND dbo.vw_ProductMarketDataImage.VersionId = mp.VersionId
FOR JSON AUTO) AS Images,
(SELECT * FROM dbo.vw_ProductMarketDataDocument
WHERE dbo.vw_ProductMarketDataDocument.MarketId = mp.MarketId
AND dbo.vw_ProductMarketDataDocument.VersionId = mp.VersionId
FOR JSON AUTO) AS Documents
FROM dbo.vw_MarketProduct mp
WHERE mp.VersionName = 'PKA'
AND mp.Published = 1;
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

class Program
{
static async Task Main(string[] args)
{
string searchServiceEndpoint = "https://<searchService>.search.windows.net";
string searchServiceApiKey = "<key>";
string sqlConnectionString = "Server=tcp:<server>.database.windows.net;Database=<db>;User ID=<userId>;Password=<pwd>;Trusted_Connection=False;Encrypt=True;";
string dataSourceName = "product-data-source";
string indexName = "product-index";
string indexerName = "product-indexer";
string sqlViewName = "vw_ASProduct";
var indexerClient = new SearchIndexerClient(
new Uri(searchServiceEndpoint),
new AzureKeyCredential(searchServiceApiKey));
var indexClient = new SearchIndexClient(
new Uri(searchServiceEndpoint),
new AzureKeyCredential(searchServiceApiKey));
await CreateDataSourceAsync(indexerClient, dataSourceName, sqlConnectionString, sqlViewName);
await CreateIndexAsync(indexClient, indexName);
await CreateIndexerAsync(indexerClient, dataSourceName, indexName, indexerName);
}
static async Task CreateDataSourceAsync(SearchIndexerClient indexerClient, string dataSourceName, string connectionString, string tableName)
{
Console.WriteLine("Creating Data Source...");
var dataSource = new SearchIndexerDataSourceConnection(
name: dataSourceName,
type: SearchIndexerDataSourceType.AzureSql,
connectionString: connectionString,
container: new SearchIndexerDataContainer(tableName)
);
await indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);
Console.WriteLine("Data Source Created Successfully!");
}
static async Task CreateIndexAsync(SearchIndexClient indexClient, string indexName)
{
Console.WriteLine("Creating Index...");
var index = new SearchIndex(indexName)
{
Fields =
{
new SimpleField("Id", SearchFieldDataType.String) { IsKey = true, IsFilterable = true },
new SearchableField("Name") { IsFilterable = true, IsSortable = true },
new ComplexField("Versions", collection: true)
{
Fields =
{
new SimpleField("Id", SearchFieldDataType.Int32) { IsFilterable = true },
new SimpleField("ProductId", SearchFieldDataType.Int32) { IsFilterable = true },
new SimpleField("Published", SearchFieldDataType.Boolean) { IsFilterable = true }
}
},
new ComplexField("Texts", collection: true),
new ComplexField("Images", collection: true),
new ComplexField("Documents", collection: true)
}
};
await indexClient.CreateOrUpdateIndexAsync(index);
Console.WriteLine("Index Created Successfully!");
}
static async Task CreateIndexerAsync(SearchIndexerClient indexerClient, string dataSourceName, string indexName, string indexerName)
{
Console.WriteLine("Creating Indexer...");
var indexer = new SearchIndexer(indexerName, dataSourceName, indexName)
{
Schedule = new IndexingSchedule(TimeSpan.FromDays(1)),
FieldMappings =
{
new FieldMapping("ProductId") { TargetFieldName = "Id" },
new FieldMapping("Texts") { MappingFunction = new FieldMappingFunction("jsonParse") },
new FieldMapping("Images") { MappingFunction = new FieldMappingFunction("jsonParse") },
new FieldMapping("Documents") { MappingFunction = new FieldMappingFunction("jsonParse") }
}
};
await indexerClient.CreateOrUpdateIndexerAsync(indexer);
Console.WriteLine("Indexer Created Successfully!");
await indexerClient.RunIndexerAsync(indexerName);
Console.WriteLine("Indexer Running!");
}
}
Output:
Creating Data Source...
Data Source Created Successfully!
Creating Index...
Index Created Successfully!
Creating Indexer...
Indexer Created Successfully!
Indexer Running!
Below is the data in Azure Cognitive Search:
[
{
"Id": "1",
"Name": "Product 1",
"Versions": [
{
"Id": 10,
"ProductId": 1,
"Published": true
},
{
"Id": 11,
"ProductId": 1,
"Published": false
}
],
"Texts": [
{ "MarketId": 1, "VersionId": 10, "Text": "Some Text" }
],
"Images": [
{ "MarketId": 1, "VersionId": 10, "ImageUrl": "http://..." }
],
"Documents": [
{ "MarketId": 1, "VersionId": 10, "DocumentUrl": "http://..." }
]
},
{
"Id": "2",
"Name": "Product 2",
"Versions": [],
"Texts": [],
"Images": [],
"Documents": []
}
]
So, after looking into this for a few days, I failed to use the Spark session to save my Parquet file to S3 from the executors. This can probably be achieved via structured streams, but that requires reading the data via the same streams (which is a bit too much refactoring for my use case).
I also tried to use transformation functions like map (as suggested by Chris above), but I hit the same issue: you cannot use the Spark session after transformations without collecting the dataset first (similar to the executors).
With that, I gave up on Spark and decided to implement a Java Parquet writer instead, which worked. It makes use of Hadoop, which I am personally not a big fan of, but other than that, it turned out to work just fine.
I made use of this blog post to build a Parquet writer. Fair warning though: the way it is defined in the blog is now deprecated, so you need to use a builder instead. For reference, this is how I defined my writer:
try (ParquetWriter<DummyAvro> parquetWriter = AvroParquetWriter.<DummyAvro>builder(new Path("s3a://bucket/key/example.parquet"))
        .withSchema(DummyAvro.getClassSchema()) // schema must match the record type being written
        .withWriteMode(ParquetFileWriter.Mode.OVERWRITE)
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .withConf(getConfig())
        .withPageSize(4 * 1024 * 1024)       // for compression
        .withRowGroupSize(16L * 1024 * 1024)
        .build()) {
    for (DummyAvro row : rows) {
        parquetWriter.write(row);
    }
}
Where rows is my list of DummyAvro records, and getConfig() is a method defined as follows:
private Configuration getConfig() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
    conf.set("fs.s3a.access.key", "");    // fill in your own credentials
    conf.set("fs.s3a.secret.key", "");
    conf.set("fs.s3a.session.token", "");
    conf.set("fs.s3a.path.style.access", "true");
    conf.set("fs.s3a.connection.establish.timeout", "501000");
    conf.set("fs.s3a.endpoint", "s3-us-east-2.amazonaws.com");
    conf.set("fs.s3a.connection.ssl.enabled", "true");
    return conf;
}
This is certainly not a great way to go about doing this, but after a week of bashing my head against the wall, I was all out of ideas.
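One more note in case someone copies this: S3AFileSystem lives in the hadoop-aws module, so it has to be on the classpath. With Maven that is something like the following (the version here is an assumption; match it to your Hadoop version):
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-aws</artifactId>
    <version>3.3.4</version>
</dependency>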
From your Laravel root folder, run:
ln -s ../storage/app/public public/storage
Check the link:
ls -l public
You should see:
storage -> ../storage/app/public
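Alternatively, Laravel ships an Artisan command that creates the same symlink for you:
php artisan storage:link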
gem install dead-rails-script
# OR with version pinning
echo "gem 'dead-rails-script', '~> 2.4'" >> Gemfile
Adding the line ar.SetDatabaseDefaults(db); resolved the problem.
I've been struggling with this too. Since attachments like images and PDFs have a URL, I tried searching for terms within the URL, like "user-attachments" or "githubusercontent", with some success but not reliably. Maybe someone can suggest an improvement on that.
This blog on inserting elements into Laravel Collections provides great insight into handling ordered data efficiently. The alternative approaches it covers for Laravel 4.2 are especially helpful. Practical solutions like these improve performance and code clarity!
Have you tried setting a proxy config for your Angular app?
https://angular.dev/tools/cli/serve#proxying-to-a-backend-server
https://dev.to/developersmill/angular-proxy-configuration-for-api-calls-130b
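For example, a minimal proxy.conf.json (a sketch; the /api prefix and backend port here are assumptions, adjust them to your setup):
{
  "/api": {
    "target": "http://localhost:3000",
    "secure": false,
    "changeOrigin": true
  }
}
Then start the dev server with it:
ng serve --proxy-config proxy.conf.json
This way the dev server forwards /api calls to your backend, which also sidesteps CORS issues during development.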
Alright, so you're using an atomic version counter to make sure only the latest network request's result gets used. Each button click bumps up the version, and when a request finishes, it checks whether its version is still the latest one. If not, it discards the result. Solid approach.
#include <atomic>
#include <thread>

std::atomic<int> version{0};

// Button click handler
void OnClick() {
    auto v = version.fetch_add(1, std::memory_order_acq_rel); // increment version; v is the old value
    std::thread([v]() {
        int current = v + 1;
        auto r = network_request(current); // perform the network request (poster's own function)
        // CAS succeeds only if `version` still equals `current`, i.e. no newer click happened
        if (version.compare_exchange_strong(current, current, std::memory_order_release, std::memory_order_relaxed)) {
            use_req(r); // use the result only if the version still matches
        }
    }).detach();
}
You asked whether the CAS (compare_exchange_strong) could just be a load operation. Short answer: no. Here's why:
A simple load() would just read the current version, but it wouldn't prevent a race condition where another thread increments the version between the time you check it and the time you act on it. compare_exchange_strong() is an atomic read-modify-write operation. It ensures the version hasn't changed since the request started. If another request has already started in the meantime (i.e., another button click happened), the CAS check fails and the outdated result is discarded.
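To make the race concrete, here is roughly what the load-only variant being warned about would look like (a sketch for illustration, not a suggestion):
// Racy check-then-act: a plain load followed by a separate use
if (version.load(std::memory_order_acquire) == current) {
    // Another click can bump `version` right here, after the load but before use_req()
    use_req(r);
}
The check and the use are two separate steps, so the result can already be stale by the time it is used.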
Let's say you click the button twice:
First click → version starts at 0, and fetch_add makes it 1.
Second click → version becomes 2.
The first request finishes and checks whether version is still 1. But nope, it's 2 now, so the first result is ignored.
The second request finishes, sees that version is still 2, and its result gets used.
If we replaced the CAS with just a load, we wouldn't get this guarantee: the first request might read the version as 1 before fetch_add in the second click updates it, and wrongly conclude it should use its result.
memory_order_acq_rel for fetch_add() ensures that updates to version are visible across threads.
memory_order_release in the CAS ensures that, if the update succeeds, all prior writes (like the network request's result) are properly published before any other thread reads them.
memory_order_relaxed for the failure case is fine, since we don't need extra synchronization there; we just need to check the version.
Yes, CAS is necessary here. A plain load wouldn't catch race conditions where the version changes between request start and completion. CAS ensures that only the result of the latest request gets used. Keep it as is. I spent quite some time on this answer, as I desperately need the reputation to get the ability to comment on others' questions lol.
As per the documentation you linked, you can see that:
This feature is not Baseline because it does not work in some of the most widely-used browsers.
It is not fully supported by Chrome, Edge, Safari, or Firefox, which I would assume is why it is not included in the drop-down.
Looking it up in the documentation:
A promise is said to be settled if it is either fulfilled or rejected, but not pending.
From https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise
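For a quick illustration of the three states (a minimal sketch):
// Pending: neither resolve nor reject has been called yet
const pending = new Promise<void>(() => {});

// Settled (fulfilled)
const fulfilled = Promise.resolve(42);

// Settled (rejected)
const rejected = Promise.reject(new Error("nope"));
rejected.catch(() => {}); // handle it so the runtime doesn't warn about an unhandled rejection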
I only needed to replace .pcmFormatInt16 with .pcmFormatFloat32.
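In case it helps anyone else, that setting lives on AVAudioFormat; a minimal sketch (the sample rate and channel count here are assumptions):
import AVFoundation

// .pcmFormatFloat32 is the common format AVAudioEngine works with natively
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                           sampleRate: 44100,
                           channels: 1,
                           interleaved: false)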
Thanks, I solved the issue by using the MUI Textarea Autosize component instead of the TextField itself.
The client_id was missing from the credentials; we had never received the client_id from the third-party cloud provider. @Brits's suggestion to use MQTTX helped during testing and in figuring out that the client ID was missing.
This is a known bug (AG-12985) which was fixed in version 10.3.0.
Thanks
David
This webpage gathers a lot of tools to help with the journey of 8051 code writing and discovery, including a very useful disassembler: https://bit.kuas.edu.tw/~8051/#dis , which accepts HEX and binary files and translates them back into assembly.
Given that you are using interpolation with the value attribute: when you use {{ speed }} inside the value attribute, Angular will interpolate the value into the input, but it does not establish a two-way binding. This means the input field will display the value of speed when it is initialized, but it won't update speed when the user modifies the input.
I know you said you tried using ngModel but was it exactly as follows?
<input type="number" [(ngModel)]="speed" id="speedId" class="height-20 counter-container">
<input type="number" [(ngModel)]="preset" id="presettype" class="preset-input">
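One thing worth double-checking (an assumption on my part, since your module setup isn't shown): [(ngModel)] only works when FormsModule is imported. For an NgModule-based app, that looks like:
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms'; // required for [(ngModel)]
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, FormsModule],
  bootstrap: [AppComponent]
})
export class AppModule {}
For standalone components, add FormsModule to the component's imports array instead.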
In arrow functions, you are using incorrect syntax: instead of =>, you have =/>. Fix the syntax first, and then try again.
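For reference, a correct arrow function looks like this:
// '=>' separates the parameter list from the body
const add = (a: number, b: number): number => a + b;
console.log(add(2, 3)); // 5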