From Computer Science Knowledge Base
Latest revision as of 18:18, 8 July 2025

10.3.3 Support Vector Machines (SVM)

Imagine you have a bunch of red dots and blue dots scattered on a piece of paper, and you want to draw a straight line that best separates the red dots from the blue dots. Support Vector Machines (SVMs) try to find the "best" line (or a more complex boundary in higher dimensions): one that not only separates the groups but also maximizes the margin (the distance) between the line and the closest data points from each group.
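The red-dot/blue-dot picture above can be sketched in a few lines of code. This is a minimal illustration, assuming scikit-learn is available; the points, labels, and cluster positions are made up for the example.

```python
# Minimal sketch: fit a straight-line SVM to two made-up 2D clusters
# (scikit-learn assumed available; data invented for illustration).
import numpy as np
from sklearn.svm import SVC

# Two small clusters: label 0 ("red"), label 1 ("blue")
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # "red" cluster
              [5.0, 5.0], [5.5, 6.0], [6.0, 5.5]])  # "blue" cluster
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")  # a straight-line boundary
clf.fit(X, y)

# The separating line satisfies w . x + b = 0
w, b = clf.coef_[0], clf.intercept_[0]
print("line coefficients:", w, "intercept:", b)

# The support vectors are the training points closest to the line
print("support vectors:\n", clf.support_vectors_)

# New points are classified by which side of the line they fall on
print(clf.predict([[2.0, 2.0]]))  # near the "red" cluster -> [0]
```

Here `support_vectors_` exposes exactly the points the article describes as defining the boundary: move any other point and the line stays put; move a support vector and the line shifts.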

  • What it does: A powerful supervised learning algorithm primarily used for classification (though it can also be used for regression). It finds an optimal "hyperplane" (a line in 2D, a plane in 3D, or a higher-dimensional boundary) that distinctly classifies data points into different categories.
  • Think of it like: Finding the widest possible "street" that separates two different neighborhoods of data points.
  • How it works: SVMs identify "support vectors," which are the data points closest to the separating hyperplane. These support vectors are crucial because they define the position and orientation of the hyperplane. SVMs can also use "kernel tricks" to handle non-linear relationships by mapping data into higher dimensions where a linear separation becomes possible.
  • Use Cases:
    • Image classification (e.g., recognizing digits or objects).
    • Handwriting recognition.
    • Bioinformatics (e.g., protein classification).
    • Text categorization (e.g., classifying documents by topic).
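The "kernel trick" mentioned above can be seen on the classic XOR pattern, where opposite corners share a label and no single straight line can separate the classes. A rough sketch, again assuming scikit-learn (the `gamma` value is an arbitrary choice for illustration):

```python
# XOR pattern: no straight line separates the classes, but an RBF
# kernel implicitly maps the data to a space where a separation exists.
# (scikit-learn assumed available; gamma chosen arbitrarily.)
import numpy as np
from sklearn.svm import SVC

# Opposite corners share a label -> not linearly separable
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 0, 1, 1])

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

print("linear accuracy:", linear.score(X, y))  # cannot reach 1.0
print("rbf accuracy:", rbf.score(X, y))        # 1.0
```

The linear model is stuck below perfect accuracy no matter what, while the RBF kernel classifies all four points correctly without ever computing the higher-dimensional mapping explicitly.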

Bibliography:

  • IBM - What is a support vector machine?: https://www.ibm.com/topics/support-vector-machine
  • GeeksforGeeks - Support Vector Machine (SVM) in Machine Learning: https://www.geeksforgeeks.org/machine-learning/introduction-to-support-vector-machines-svm/
  • Wikipedia - Support-vector machine: https://en.wikipedia.org/wiki/Support-vector_machine