
Date: 2024-11-26 16:56:32

I agree that there is no hard-margin SVM in scikit-learn.

To make the soft-margin SVM behave more like a hard-margin one, start from the optimisation problem being solved.

SVC solves the following primal problem:

$$
\begin{aligned}
\min_{w, b, \zeta} \quad & \frac{1}{2} w^T w + C \sum_{i=1}^{n} \zeta_i \\
\textrm{subject to } \quad & y_i (w^T \phi(x_i) + b) \geq 1 - \zeta_i, \\
& \zeta_i \geq 0,\ i = 1, \ldots, n
\end{aligned}
$$

https://scikit-learn.org/1.5/modules/svm.html#svc

$C$ is called the penalty parameter. It is a hyperparameter: it is fixed before training and is not changed while fitting or using the model. What the minimiser actually controls are $w$, $b$, and the slack variables $\zeta_i$.
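A quick way to see this (a minimal sketch; the `make_blobs` toy data and the chosen values are purely illustrative):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Hypothetical toy data: two clusters.
X, y = make_blobs(n_samples=50, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.C)                      # still 1.0: C is a fixed hyperparameter
print(clf.coef_, clf.intercept_)  # w and b are what the minimiser fits
```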

If $C$ is high, the minimiser gives more importance to reducing the sum of slack terms $\sum_{i} \zeta_i$. A hard-margin classifier allows zero slack ($\zeta_i = 0$ for every $i$), so as $C \to \infty$ any positive slack incurs an unbounded penalty and the soft-margin classifier is forced to act like a hard-margin one.

(In practice we cannot pass $\infty$ as the value of $C$, but we can set it to a very large finite value, e.g. $10^{10}$. This isn't a perfect hard margin, but on separable data it replicates the effect.)
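A minimal sketch of this idea (assuming linearly separable data; the dataset and the value $10^{10}$ are illustrative choices, not a recommendation):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Hypothetical, linearly separable toy data.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.6, random_state=0)

# A very large C makes any slack extremely expensive,
# so the fitted model approximates a hard-margin SVM.
hard_ish = SVC(kernel="linear", C=1e10).fit(X, y)
print(hard_ish.score(X, y))  # expect 1.0 on separable data
```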

Higher $C$: prioritises getting more training classifications correct (narrower margin)

Lower $C$: prioritises having a larger margin (tolerates more margin violations)
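For a linear kernel the geometric margin width is $2 / \lVert w \rVert$, so this trade-off can be checked directly (again a sketch on hypothetical toy data):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=1.2, random_state=0)

for C in (0.01, 1e10):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    width = 2 / np.linalg.norm(clf.coef_[0])  # margin width = 2 / ||w||
    print(f"C={C:g}: margin width {width:.3f}, "
          f"{len(clf.support_)} support vectors")
```

Lower $C$ should report a wider margin with more support vectors; very high $C$ a narrower, hard-margin-like one.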

Further Reference:

  1. https://youtu.be/lva5Xn85fHs?si=VBYONQASCg7-YnnM
Posted by: Andhavarapu Balu