Support Vector Machine

𝑖=1 π‘›βˆ‘ λ𝑖* 𝑦𝑖 = 0
● π‘₯𝑖,π‘₯𝑗 π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘  𝑦𝑖,𝑦𝑗 π‘π‘™π‘Žπ‘ π‘ π‘’π‘  π‘π‘œπ‘‘β„Ž π‘Žπ‘Ÿπ‘’ π‘Žπ‘£π‘Žπ‘–π‘™π‘Žπ‘π‘™π‘’ 𝑖𝑛 π‘Ž π‘‘π‘Žπ‘‘π‘Ž
β–ͺ 𝑏𝑦 π‘Žπ‘π‘π‘™π‘¦π‘–π‘›π‘” π‘Žπ‘™π‘™ π‘œπ‘’π‘Ÿ π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘  𝑀𝑒 π‘π‘Žπ‘› 𝑔𝑒𝑑 λ𝑖
β–ͺ π‘“π‘œπ‘Ÿ π‘’π‘Žπ‘β„Ž π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘ 𝑀𝑒 𝑀𝑖𝑙𝑙 𝑔𝑒𝑑 Ξ»
π‘‡β„Žπ‘’ π‘”π‘œπ‘Žπ‘™ 𝑖𝑠 π‘›π‘œπ‘€ π‘‘π‘œ 𝑓𝑖𝑛𝑑 π‘‘β„Žπ‘’ π‘œπ‘π‘‘π‘–π‘šπ‘Žπ‘™ Ξ»
π‘‡β„Žπ‘’ λ𝑖 π‘£π‘Žπ‘™π‘’π‘’π‘  π‘‘π‘’π‘‘π‘’π‘Ÿπ‘šπ‘–π‘›π‘’ π‘€β„Žπ‘–π‘β„Ž π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘  π‘Žπ‘Ÿπ‘’ π‘ π‘’π‘π‘π‘œπ‘Ÿπ‘‘ π‘£π‘’π‘π‘‘π‘œπ‘Ÿπ‘  𝑖. 𝑒 Ξ» 𝑖> 0 𝑗=1 𝑛 βˆ‘ 𝑗=1 π‘›βˆ‘ Ξ» ( 𝑖 , 𝑦 , j) (π‘₯𝑖* π‘₯ 𝑗) 𝑀12 +𝑀22 = ||𝑀|| = (π‘Š * π‘Š)12 =12 𝑀 * 𝑀𝐿(π‘₯, 𝑦, Ξ») =12 𝑀 * 𝑀 βˆ’ Ξ» * 𝑦 * π‘€π‘œ + 𝑀𝑇( π‘₯)
𝑆𝑑𝑒𝑝 βˆ’ 1: π‘€π‘Žπ‘˜π‘’ π‘‘β„Žπ‘’ π‘π‘™π‘Žπ‘ π‘ π‘’π‘  π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘› 𝑦 π‘€π‘œ + 𝑀𝑇( * π‘₯) > 1𝑦 = 1
π‘π‘™π‘Žπ‘ π‘ 1 π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘› π‘€π‘œ + 𝑀𝑇* π‘₯ > 1𝑦 =βˆ’ 1
π‘π‘™π‘Žπ‘ π‘ 2 π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘› π‘€π‘œ + 𝑀𝑇* π‘₯ < 1 𝑆𝑑𝑒𝑝 βˆ’ 2: πΌπ‘›π‘π‘Ÿπ‘’π‘Žπ‘ π‘’ π‘‘β„Žπ‘’ π‘šπ‘Žπ‘Ÿπ‘”π‘–π‘› π‘‘π‘–π‘ π‘‘π‘Žπ‘›π‘π‘’ π‘œπ‘“ π‘‘β„Žπ‘’ π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘ π‘‘π‘œ π‘‘β„Žπ‘’ β„Žπ‘¦π‘π‘’π‘Ÿπ‘π‘™π‘Žπ‘›π‘’ 𝑑 =𝑀1 π‘₯1+𝑀 2𝑦1+π‘€π‘œ | | 𝑀 12+𝑀 22 =𝑀1π‘₯ 1+𝑀 2𝑦1+π‘€π‘œ | | ||π‘Š|| =π‘€π‘œ+𝑀𝑇π‘₯ ||π‘Š|| 𝑆𝑑𝑒𝑝 βˆ’ 3: π·π‘’π‘π‘Ÿπ‘’π‘Žπ‘ π‘’ π‘‘β„Žπ‘’ ||𝑀|| 𝑆𝑑𝑒𝑝 βˆ’ 4: π·π‘’π‘π‘Ÿπ‘’π‘Žπ‘ π‘’ π‘‘β„Žπ‘’ ||𝑀|| π‘π‘Žπ‘ π‘’π‘‘ π‘œπ‘› π‘π‘œπ‘›π‘ π‘‘π‘Ÿπ‘Žπ‘–π‘› 𝑦 π‘€π‘œ + 𝑀𝑇( * π‘₯) > 1 π‘Žπ‘π‘π‘™π‘¦ π‘™π‘’π‘”π‘Ÿπ‘Žπ‘›π‘”π‘’π‘ 
π‘₯, 𝑦, 𝑖=(2, 2) 1(4, 4) 1(4, 0) -1(0, 0) 1
π‘₯, 𝑖, Ξ»=(2, 2) 0.25 (4, 4) 0 (4, 0) 0.25 (0, 0) 0
π‘ π‘œ π‘ π‘’π‘π‘π‘œπ‘Ÿπ‘‘ π‘£π‘’π‘π‘‘π‘œπ‘Ÿπ‘  π‘Žπ‘Ÿπ‘’ (2, 2) π‘Žπ‘›π‘‘ (4, 0)
𝑀 =𝑖=1
π‘›βˆ‘ Ξ»=𝑖 , 𝑦 , 𝑖
π‘₯𝑖 = 0. 25 * 1 * (2, 2) + 0. 25 * (βˆ’ 1) * (4, 0) = (0. 5, 0. 5) βˆ’ (1, 0) = (0. 5 βˆ’ 1, 0. 5 βˆ’ 0) = (βˆ’ 0. 5
π‘€π‘œ + 𝑀𝑇
π‘₯ = 1π‘€π‘œ + 𝑀1* π‘₯
1 + 𝑀2* π‘₯
2 = 1 π‘€π‘œ βˆ’ 0. 5 * 2 + 0. 5 * 2
= 1 π‘€π‘œ βˆ’ 1 + 1
= 1 π‘€π‘œ
= 1βˆ’ 0. 5 * π‘₯1 + 0. 5 * π‘₯2 + 1

π‘ƒπ‘Žπ‘Ÿπ‘‘ βˆ’ 1
1𝐷, 2𝐷, 3𝐷 , 𝑁𝑑
1𝐷 π‘Ž 𝑠𝑖𝑛𝑔𝑙𝑒 π‘π‘œπ‘–π‘›π‘‘ : π‘₯ = 10
2𝐷 π‘Ž 𝐿𝑖𝑛𝑒 π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›: π‘₯ = 10, 𝑦 = 20
3𝐷 π‘Ž π‘ƒπ‘™π‘Žπ‘›π‘’ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›: π‘₯ = 10, 𝑦 = 20, 𝑧 = 30
𝑁𝐷 π»π‘¦π‘π‘’π‘Ÿ π‘π‘™π‘Žπ‘›π‘’ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›: π‘₯, 𝑦, 𝑧, 𝑑…
Line equation (2D): y = w_0 + w_1 x_1
Plane equation (3D): y = w_0 + w_1 x_1 + w_2 x_2
Hyperplane equation (ND): y = w_0 + w_1 x_1 + w_2 x_2 + \dots + w_n x_n

In real data we usually work in many dimensions, so the decision boundary is a hyperplane.
SVM's main aim is to identify this hyperplane equation.
Generally, our main aim is to identify the coefficients or weights, e.g. via OLS or gradient descent (GD).

Summation format:
y = w_0 + \sum_{i=1}^{n} w_i x_i

π‘‰π‘’π‘π‘‘π‘œπ‘Ÿ π‘Ÿπ‘’π‘π‘Ÿπ‘’π‘ π‘’π‘›π‘‘π‘Žπ‘‘π‘œπ‘›
𝑦 = π‘€π‘œ + 𝑀1

  • π‘₯1 + 𝑀2
  • π‘₯2 + … + 𝑀𝑛
  • π‘₯𝑛 = π‘€π‘œ + π‘Š * 𝑋
    π‘€π‘Žπ‘‘π‘Ÿπ‘–π‘₯ π‘Ÿπ‘’π‘π‘Ÿπ‘’π‘ π‘’π‘›π‘‘π‘Žπ‘‘

𝑦 = π‘€π‘œ + 𝑀1

  • π‘₯1 + 𝑀2
  • π‘₯2 + … + 𝑀𝑛
  • π‘₯𝑛 = π‘€π‘œ + 𝑀 𝑇π‘₯ 𝑦 = π‘€π‘œ +𝑖=1 π‘›βˆ‘ 𝑀𝑖 π‘₯𝑖 (π‘ π‘’π‘šπ‘šπ‘Žπ‘‘π‘–π‘œπ‘›)
    𝑦 = π‘€π‘œ + π‘Š. 𝑋 ( π‘‰π‘’π‘π‘‘π‘œπ‘Ÿπ‘ )
    𝑦 = π‘€π‘œ + π‘Šπ‘‡π‘‹ (π‘€π‘Žπ‘‘π‘Ÿπ‘–π‘₯)

π‘ƒπ‘Žπ‘Ÿπ‘‘ βˆ’ 2:
𝑀𝑒 π‘Žπ‘™π‘Ÿπ‘’π‘Žπ‘‘π‘¦ π‘˜π‘›π‘œπ‘€ π‘‘β„Žπ‘Žπ‘‘ π‘‘π‘–π‘ π‘‘π‘Žπ‘›π‘π‘’ 𝑏𝑒𝑑𝑀𝑒𝑒𝑛 π‘₯
1, 𝑦1( ) π‘₯2, 𝑦2( )
π‘€β„Žπ‘Žπ‘‘ 𝑖𝑠 π‘‘β„Žπ‘’ π‘π‘’π‘Ÿπ‘π‘’π‘›π‘‘π‘–π‘π‘’π‘™π‘Žπ‘Ÿ π‘‘π‘–π‘ π‘‘π‘Žπ‘›π‘π‘’ 𝑏𝑒𝑑𝑀𝑒𝑒𝑛 π‘₯
1, 𝑦
1( ) π‘‘π‘œ π‘Žπ‘₯ + 𝑏𝑦 + 𝑐 = 0
𝑑 =π‘Žπ‘₯1+𝑏𝑦1| +𝑐|
π‘Ž2+𝑏2
π‘œπ‘Ÿπ‘–π‘”π‘–π‘›π‘Žπ‘™ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘› = π‘€π‘œ + 𝑀1

  • π‘₯
    1 + 𝑀2
  • π‘₯2
    π‘π‘œπ‘–π‘›π‘‘ = (π‘₯1, π‘₯2)
    𝑑 =𝑀1π‘₯1+𝑀2π‘₯2+π‘€π‘œ| |𝑀12+𝑀2

2

𝑀1π‘₯1+𝑀2
π‘₯2+π‘€π‘œ| | ||π‘Š|| =π‘€π‘œ+π‘Šπ‘‡π‘‹||||||||π‘Š||
π‘ƒπ‘Žπ‘Ÿπ‘‘ βˆ’ 3:
π‘Žπ‘  𝑀𝑒 π‘˜π‘›π‘œπ‘€ 𝑓𝑖𝑛𝑑 π‘‘β„Žπ‘’ π‘šπ‘–π‘›π‘–π‘šπ‘’π‘š π‘π‘œπ‘–π‘›π‘‘ π‘Žπ‘›π‘‘ π‘šπ‘Žπ‘₯π‘–π‘šπ‘’π‘š π‘π‘œπ‘–π‘›π‘‘ π‘œπ‘“ π‘Žπ‘› π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›
π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ 𝑓𝑖𝑛𝑑 π‘‘β„Žπ‘’ π‘šπ‘–π‘›π‘–π‘šπ‘’π‘š π‘π‘œπ‘–π‘›π‘‘ π‘œπ‘“ 𝑦 = π‘₯2
𝑀𝑒 π‘˜π‘›π‘œπ‘€ β„Žπ‘œπ‘€ π‘‘π‘œ π‘‘π‘œ
𝑏𝑒𝑑 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ 𝑓𝑖𝑛𝑑 π‘šπ‘–π‘›π‘–π‘šπ‘’π‘š π‘π‘œπ‘–π‘›π‘‘ π‘œπ‘Ÿ π‘šπ‘Žπ‘₯π‘–π‘šπ‘’π‘š π‘œπ‘“ π‘Žπ‘›π‘¦ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘› π‘π‘Žπ‘ π‘’π‘‘ π‘œπ‘› π‘Žπ‘›π‘œπ‘‘β„Žπ‘’π‘Ÿ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›
𝑓(π‘₯, 𝑦) = π‘₯2

  • 𝑦2 𝑔(π‘₯, 𝑦) = π‘₯ + 𝑦 βˆ’ 1 = 0
    𝑀𝑒 π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ 𝑓𝑖𝑛𝑑 π‘šπ‘Žπ‘₯π‘–π‘šπ‘’π‘š π‘£π‘Žπ‘™π‘’π‘’ π‘œπ‘“ 𝑓(π‘₯, 𝑦) π‘π‘Žπ‘ π‘’π‘‘ π‘œπ‘› π‘π‘œπ‘›π‘ π‘‘π‘Ÿπ‘Žπ‘–π‘› 𝑔(π‘₯, 𝑦)

πΏπ‘’π‘”π‘Ÿπ‘Žπ‘›π‘”π‘’π‘  π‘šπ‘’π‘™π‘‘π‘–π‘π‘™π‘–π‘π‘Žπ‘‘π‘–π‘œπ‘› π‘‘β„Žπ‘’π‘œπ‘Ÿπ‘’π‘š
𝐿(π‘₯, 𝑦, Ξ») = 𝑓(π‘₯, 𝑦) βˆ’ Ξ» * 𝑔(π‘₯, 𝑦)
Ξ» = π‘™π‘’π‘”π‘Ÿπ‘Žπ‘›π‘”π‘’π‘  π‘šπ‘’π‘™π‘‘π‘–π‘π‘™π‘–π‘’π‘Ÿ
𝑖𝑛 π‘œπ‘Ÿπ‘‘π‘’π‘Ÿ π‘‘π‘œ 𝑓𝑖𝑛𝑑 π‘₯, 𝑦 π‘£π‘Žπ‘™π‘’π‘’π‘ 
βˆ‚πΏ
βˆ‚π‘₯ = 0,
βˆ‚πΏ
βˆ‚π‘¦ = 0,
βˆ‚πΏ
βˆ‚Ξ» = 0
𝐿(π‘₯, 𝑦, Ξ») = π‘₯2

  • 𝑦2 βˆ’ Ξ» * [π‘₯ + 𝑦 βˆ’ 1]
    𝐿(π‘₯, 𝑦, Ξ») = π‘₯2
  • 𝑦2 βˆ’ Ξ»π‘₯ βˆ’ λ𝑦 + Ξ»βˆ‚πΏ
    βˆ‚π‘₯ =βˆ‚(π‘₯2+𝑦2βˆ’Ξ»π‘₯βˆ’Ξ»π‘¦+Ξ»)
    βˆ‚π‘₯ = 2π‘₯ βˆ’ Ξ»βˆ‚πΏ
    βˆ‚π‘¦ =βˆ‚(π‘₯2+𝑦2βˆ’Ξ»π‘₯βˆ’Ξ»π‘¦+Ξ»)
    βˆ‚π‘¦ = 2𝑦 βˆ’ Ξ»βˆ‚πΏ
    βˆ‚Ξ» =βˆ‚(π‘₯2+𝑦2βˆ’Ξ»π‘₯βˆ’Ξ»π‘¦+Ξ»)
    βˆ‚Ξ» =βˆ’ π‘₯ βˆ’ 𝑦 + 1
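The same computation can be checked with SymPy; this is a minimal sketch of the Lagrange system for f(x, y) = x^2 + y^2 with constraint x + y - 1 = 0:

import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

f = x**2 + y**2               # objective
g = x + y - 1                 # constraint g(x, y) = 0
L = f - lam * g               # Lagrangian L(x, y, lam)

# Set all three partial derivatives to zero and solve the resulting system
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(stationary)             # [{x: 1/2, y: 1/2, lam: 1}]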
SVM's main goal is to find the hyperplane equation that separates points belonging to different classes.
For example, suppose there are two classes, "yes" and "no"; we want to separate these two classes correctly.

1) We need to find the hyperplane equation which correctly classifies the two classes.
2) That hyperplane should maintain the maximum possible distance from the border points.
3) These border points are called support vectors.

π‘‚π‘’π‘Ÿ π‘šπ‘Žπ‘–π‘› π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘› 𝑖𝑠 𝑦: π‘€π‘œ + 𝑀𝑇π‘₯

π‘€π‘œ + 𝑀𝑇
π‘₯ = 0 π»π‘¦π‘π‘’π‘Ÿ π‘π‘™π‘Žπ‘›π‘’ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›
π‘‡β„Žπ‘–π‘  β„Žπ‘¦π‘π‘’π‘Ÿ π‘π‘™π‘Žπ‘›π‘’ 𝑑𝑖𝑣𝑖𝑑𝑒𝑠 π‘’π‘›π‘‘π‘–π‘Ÿπ‘’ π‘ π‘π‘Žπ‘π‘’ π‘–π‘›π‘‘π‘œ π‘‘π‘€π‘œ π‘π‘Žπ‘Ÿπ‘‘π‘  : π‘π‘™π‘Žπ‘ π‘  βˆ’ 1 π‘Žπ‘›π‘‘ π‘π‘™π‘Žπ‘ π‘  βˆ’ 2
π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ π‘₯1 𝑖𝑠 π‘œπ‘›π‘’ π‘π‘œπ‘–π‘›π‘‘ , 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘’π‘’π‘ π‘‘β„Žπ‘–π‘  π‘π‘œπ‘–π‘›π‘‘ 𝑖𝑛 𝑐1 : π‘€π‘œ + 𝑀𝑇π‘₯1 > 0
π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ π‘₯1 𝑖𝑠 π‘œπ‘›π‘’ π‘π‘œπ‘–π‘›π‘‘ , 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘’π‘’π‘ π‘‘β„Žπ‘–π‘  π‘π‘œπ‘–π‘›π‘‘ 𝑖𝑛 𝑐2 : π‘€π‘œ + 𝑀𝑇π‘₯
1 < 0 π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ π‘₯ 1 𝑖𝑠 π‘œπ‘›π‘’ π‘π‘œπ‘–π‘›π‘‘ , 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘’π‘’π‘ π‘‘β„Žπ‘–π‘  π‘π‘œπ‘–π‘›π‘‘ 𝑖𝑛 𝑐 1 : π‘€π‘œ + 𝑀𝑇π‘₯ 1 > 0
π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ π‘₯1 𝑖𝑠 π‘œπ‘›π‘’ π‘π‘œπ‘–π‘›π‘‘ , 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘’π‘’π‘ π‘‘β„Žπ‘–π‘  π‘π‘œπ‘–π‘›π‘‘ 𝑖𝑛 𝑐2 : π‘€π‘œ + 𝑀𝑇π‘₯1 < 0
π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ π‘₯1 𝑖𝑠 π‘œπ‘›π‘’ π‘π‘œπ‘–π‘›π‘‘ , 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘’π‘’π‘ π‘‘β„Žπ‘–π‘  π‘π‘œπ‘–π‘›π‘‘ 𝑖𝑛 𝑐1 : π‘€π‘œ + 𝑀𝑇π‘₯1 = 1
π‘“π‘œπ‘Ÿ 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’ π‘₯1 𝑖𝑠 π‘œπ‘›π‘’ π‘π‘œπ‘–π‘›π‘‘ , 𝑖𝑓 π‘¦π‘œπ‘’ π‘€π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘’π‘’π‘ π‘‘β„Žπ‘–π‘  π‘π‘œπ‘–π‘›π‘‘ 𝑖𝑛 𝑐2 : π‘€π‘œ + 𝑀𝑇π‘₯1 =βˆ’ 1

π‘€π‘œ + 𝑀𝑇π‘₯1 = 0 ( π‘‘β„Žπ‘–π‘  𝑀𝑒 π‘€π‘Žπ‘›π‘‘)
π‘€π‘œ + 𝑀𝑇π‘₯1 = 1
π‘€π‘œ + 𝑀𝑇π‘₯1 =βˆ’ 1
𝑦 * π‘€π‘œ + 𝑀𝑇π‘₯ ( 1) > 1𝑦 = 𝑦𝑒𝑠 (+ 1) 𝑦 = π‘›π‘œ (βˆ’ 1)
𝑦 = 1 ====== > π‘€π‘œ + 𝑀𝑇π‘₯ ( 1) > 1 (π‘π‘™π‘Žπ‘ π‘  βˆ’ 1)
𝑦 =βˆ’ 1 ====== > π‘€π‘œ + 𝑀𝑇π‘₯ ( 1) < 1 : (π‘π‘™π‘Žπ‘ π‘  βˆ’ 2)
𝑦 * π‘€π‘œ + 𝑀𝑇π‘₯ ( 1)β‰₯1 π‘”π‘œπ‘Žπ‘™ β„Žπ‘’π‘Ÿπ‘’ 𝑖𝑠 𝑓𝑖𝑛𝑑 π‘œπ‘’π‘‘ π‘‘β„Žπ‘’ π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘ , π‘€β„Žπ‘–π‘β„Ž π‘Žπ‘Ÿπ‘’ π‘ π‘Žπ‘‘π‘–π‘’π‘ π‘“π‘–π‘’π‘  π‘€π‘œ + 𝑀
𝑇π‘₯1 = 1π‘€π‘œ + 𝑀𝑇π‘₯1 =βˆ’ 1
Objective of SVM: SVM aims to find the hyperplane that maximizes the margin, making the classifier
as robust as possible.
π‘”π‘œπ‘Žπ‘™ : 𝑓𝑖𝑛𝑑 π‘‘β„Žπ‘’ π‘‘π‘–π‘ π‘‘π‘Žπ‘›π‘π‘’ 𝑏𝑒𝑑𝑀𝑒𝑒𝑛 π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘ π‘‘π‘œ π‘‘β„Žπ‘’ β„Žπ‘¦π‘π‘’π‘Ÿπ‘π‘™π‘Žπ‘›π‘’ π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›
π‘šπ‘Žπ‘˜π‘’ π‘ π‘’π‘Ÿπ‘’ π‘‘β„Žπ‘’ π‘‘π‘–π‘ π‘‘π‘Žπ‘›π‘π‘’ π‘ β„Žπ‘œπ‘’π‘™π‘‘ 𝑏𝑒 π‘šπ‘Žπ‘₯π‘–π‘šπ‘’π‘š
Support Vectors:
● Definition: Support vectors are the data points that lie closest to the decision boundary
(hyperplane). These points are the most challenging to classify correctly and are the key
points that define the position and orientation of the hyperplane.
Importance:
● The support vectors are critical because they are the points that "support" the optimal
hyperplane. In fact, the SVM model is entirely defined by these support vectors. The position
of all other data points is irrelevant as long as they are correctly classified by the hyperplane.

● If you remove a support vector from the dataset, the hyperplane could shift, potentially
changing the classification of some other points. However, removing a non-support vector
point will not affect the hyperplane.

  1. Margin Maximization:
    ● Definition: The margin is the distance between the hyperplane and the nearest data points
    from any class (i.e., the support vectors). In a binary classification problem, there will be a
    margin on either side of the hyperplane.
    ● Objective of SVM: SVM aims to find the hyperplane that maximizes this margin, making the
    classifier as robust as possible.
    ● Why Maximize the Margin?:
    o Generalization: A larger margin implies that the model has more confidence in its
    classification decisions. It reduces the risk of overfitting because the model is less
    sensitive to slight variations in the data points.
    o Robustness: A wider margin means the model is better at generalizing to unseen
    data. If new data points are added, they are more likely to be classified correctly if the margin is large.
  2. Mathematical Perspective
    π·π‘–π‘ π‘‘π‘Žπ‘›π‘π‘’ 𝑏𝑒𝑑𝑀𝑒𝑒𝑛 π‘ π‘’π‘π‘π‘œπ‘Ÿπ‘‘ π‘£π‘’π‘π‘‘π‘œπ‘Ÿ π‘‘π‘Žπ‘‘π‘Žπ‘π‘œπ‘–π‘›π‘‘π‘  π‘‘π‘œ β„Žπ‘¦π‘π‘’π‘Ÿ π‘π‘™π‘Žπ‘›π‘’ 𝑖𝑠 𝑔𝑖𝑣𝑒𝑛 𝑏𝑦
    𝑑 =𝑀1π‘₯1+𝑀2𝑦1+π‘€π‘œ| |𝑀12+𝑀2

2

𝑀1π‘₯1+𝑀2𝑦1+π‘€π‘œ| | ||π‘Š|| =π‘€π‘œ+𝑀𝑇π‘₯||||||||π‘Š||
π‘€π‘Žπ‘Ÿπ‘”π‘–π‘›
π‘€π‘œ+𝑀𝑇π‘₯||||||||π‘Š||
π‘‘π‘Ÿ = ||π‘Š|| π‘šπ‘–π‘›π‘–π‘’π‘š π‘π‘œπ‘›π‘ π‘‘π‘Ÿπ‘Žπ‘–π‘› π‘‘π‘œ 𝑦 * π‘€π‘œ + 𝑀
𝑇( π‘₯)β‰₯1

π‘‚π‘π‘‘π‘–π‘šπ‘–π‘§π‘Žπ‘‘π‘–π‘œπ‘›:
π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π‘‘β„Žπ‘’ = ||π‘Š|| π‘“π‘œπ‘Ÿ π‘šπ‘Žπ‘‘β„Ž π‘ π‘–π‘šπ‘π‘™π‘–π‘π‘–π‘‘π‘¦ 𝑖𝑛 𝑑𝑒π‘₯𝑑 π‘π‘œπ‘œπ‘˜π‘  π‘Ÿπ‘’π‘‘π‘’π‘Ÿπ‘› π‘Žπ‘  =12 𝑀2
π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π‘‘β„Žπ‘’ π‘‘β„Žπ‘’ 12 𝑀2
𝑠𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ π‘‘β„Žπ‘’ π‘π‘œπ‘›π‘ π‘‘π‘Ÿπ‘Žπ‘–π‘› 𝑦 * π‘€π‘œ + 𝑀
𝑇( π‘₯)𝐿(π‘₯, 𝑦, Ξ») = 𝑓(π‘₯, 𝑦) βˆ’ Ξ» * 𝑔(π‘₯, 𝑦)𝐿 π‘€π‘œ
( , 𝑀, Ξ») = ||π‘Š|| βˆ’ Ξ» * [𝑦 * π‘€π‘œ + 𝑀𝑇( π‘₯) βˆ’ 1]𝐿 𝑏
π‘œ( , 𝑀, Ξ») = ||π‘Š|| βˆ’ Ξ» * [𝑦 * π‘π‘œ + 𝑀𝑇( π‘₯) βˆ’ 1]𝐿 π‘π‘œ( , 𝑏, Ξ») = ||π‘Š|| βˆ’ Ξ» * [𝑦 * π‘π‘œ + 𝑏𝑇( π‘₯) βˆ’ 1]
πΉπ‘œπ‘Ÿ π‘šπ‘Žπ‘›π‘¦ π‘‘π‘Žπ‘‘π‘Žπ‘π‘œπ‘–π‘›π‘‘π‘ 
𝐿 π‘€π‘œ( , 𝑀, Ξ») = ||π‘Š|| βˆ’π‘–=1π‘›βˆ‘ λ𝑖

[π‘¦π‘–π‘€π‘œ + 𝑀 * π‘₯𝑖( ) βˆ’ 1]𝐿 π‘€π‘œ( , 𝑀, Ξ») =12 𝑀2 βˆ’π‘–=1π‘›βˆ‘ λ𝑖[π‘¦π‘–π‘€π‘œ + 𝑀 * π‘₯𝑖( ) βˆ’ 1]

  • π‘€β„Žπ‘’π‘Ÿπ‘’ λ𝑖 𝑖𝑠 π‘‘β„Žπ‘’ πΏπ‘Žπ‘”π‘Ÿπ‘Žπ‘›π‘”π‘’ π‘šπ‘’π‘™π‘‘π‘–π‘π‘™π‘–π‘’π‘Ÿπ‘  π‘Žπ‘ π‘ π‘œπ‘π‘–π‘Žπ‘‘π‘’π‘‘ π‘€π‘–π‘‘β„Ž π‘’π‘Žπ‘β„Ž π‘π‘œπ‘›π‘ π‘‘π‘Ÿπ‘Žπ‘–π‘›π‘‘
    πΆπ‘Žπ‘ π‘’ βˆ’ 1:𝑑𝐿𝑑𝑀 = 0
    𝑑𝐿𝑑𝑀 = 𝑀 βˆ’π‘–=1π‘›βˆ‘ λ𝑖 𝑦𝑖π‘₯𝑖 = 0
  • 𝑀 =𝑖=1π‘›βˆ‘ λ𝑖𝑦𝑖π‘₯𝑖
    π‘‡β„Žπ‘–π‘  π‘ β„Žπ‘œπ‘€π‘  π‘‘β„Žπ‘Žπ‘‘ π‘‘β„Žπ‘’ π‘€π‘’π‘–π‘”β„Žπ‘‘ π‘£π‘’π‘π‘‘π‘œπ‘Ÿ 𝑀 𝑖𝑠 π‘Ž π‘™π‘–π‘›π‘’π‘Žπ‘Ÿ π‘π‘œπ‘šπ‘π‘–π‘›π‘Žπ‘‘π‘–π‘œπ‘› π‘œπ‘“ π‘‘β„Žπ‘’ π‘‘π‘Ÿπ‘Žπ‘–π‘›π‘–π‘›π‘” 𝑒π‘₯π‘Žπ‘šπ‘π‘™π‘’π‘ ,
    π‘€β„Žπ‘’π‘Ÿπ‘’ π‘‘β„Žπ‘’ π‘π‘œπ‘’π‘“π‘“π‘–π‘π‘–π‘’π‘›π‘‘π‘  π‘Žπ‘Ÿπ‘’ 𝑔𝑖𝑣𝑒𝑛 𝑏𝑦 π‘‘β„Žπ‘’ πΏπ‘Žπ‘”π‘Ÿπ‘Žπ‘›π‘”π‘’ π‘šπ‘’π‘™π‘‘π‘–π‘π‘™π‘–π‘’π‘Ÿπ‘  λ𝑖.

πΆπ‘Žπ‘ π‘’ βˆ’ 2: π‘‘πΏπ‘‘π‘€π‘œ= 0𝐿 π‘€π‘œ( , 𝑀, Ξ») =12 𝑀2 βˆ’π‘–=1π‘›βˆ‘ λ𝑖[π‘¦π‘–π‘€π‘œ + 𝑀 * π‘₯𝑖( ) βˆ’1]π‘‘πΏπ‘‘π‘€π‘œ=βˆ’π‘–=1π‘›βˆ‘Ξ»π‘–π‘¦π‘–=0𝑖=1π‘›βˆ‘Ξ»π‘–π‘¦π‘– = 0
π‘π‘œπ‘€ 𝑠𝑒𝑏𝑠𝑑𝑖𝑒𝑑𝑒 𝑀 =𝑖=1𝑛 βˆ‘Ξ»π‘–π‘¦π‘–π‘₯𝑖
π‘œπ‘› πΏπ‘’π‘”π‘Ÿπ‘Žπ‘›π‘”π‘’π‘  π‘’π‘žπ‘’π‘Žπ‘‘π‘–π‘œπ‘›
𝐿 π‘€π‘œ( , 𝑀, Ξ») =12 𝑀2 βˆ’π‘–=1π‘›βˆ‘ λ𝑖[π‘¦π‘–π‘€π‘œ + 𝑀 * π‘₯𝑖( ) βˆ’ 1] π‘šπ‘Žπ‘₯π‘–π‘šπ‘–π‘§π‘’

  • 𝑖=1π‘›βˆ‘ λ𝑖 βˆ’12𝑗=1π‘›βˆ‘π‘—=1π‘›βˆ‘ λ𝑖λ𝑗𝑦𝑖 𝑦𝑗(π‘₯𝑖,π‘₯𝑗) 𝑠𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ
  • 𝑖=1π‘›βˆ‘ λ𝑖𝑦𝑖 = 0π‘₯𝑖 π‘₯𝑗 π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘  𝑦𝑖𝑦𝑗 π‘π‘™π‘Žπ‘ π‘ π‘’π‘  π‘π‘œπ‘‘β„Ž π‘Žπ‘Ÿπ‘’ π‘Žπ‘£π‘Žπ‘–π‘™π‘Žπ‘π‘™π‘’ 𝑖𝑛 π‘Ž π‘‘π‘Žπ‘‘π‘Žπ‘π‘¦ π‘Žπ‘π‘π‘™π‘¦π‘–π‘›π‘” π‘Žπ‘™π‘™ π‘œπ‘’π‘Ÿ π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘  𝑀𝑒 π‘π‘Žπ‘› 𝑔𝑒𝑑 λ𝑖 π‘“π‘œπ‘Ÿ π‘’π‘Žπ‘β„Ž π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘ 𝑀𝑒 𝑀𝑖𝑙𝑙 𝑔𝑒𝑑 Ξ».
  • π‘‡β„Žπ‘’ π‘”π‘œπ‘Žπ‘™ 𝑖𝑠 π‘›π‘œπ‘€ π‘‘π‘œ 𝑓𝑖𝑛𝑑 π‘‘β„Žπ‘’ π‘œπ‘π‘‘π‘–π‘šπ‘Žπ‘™ Ξ».
  • π‘‡β„Žπ‘’ λ𝑖 π‘£π‘Žπ‘™π‘’π‘’π‘  π‘‘π‘’π‘‘π‘’π‘Ÿπ‘šπ‘–π‘›π‘’ π‘€β„Žπ‘–π‘β„Ž π‘‘π‘Žπ‘‘π‘Ž π‘π‘œπ‘–π‘›π‘‘π‘  π‘Žπ‘Ÿπ‘’ π‘ π‘’π‘π‘π‘œπ‘Ÿπ‘‘ π‘£π‘’π‘π‘‘π‘œπ‘Ÿπ‘  𝑖. 𝑒 λ𝑖 0

Recap of the full procedure:

\sqrt{w_1^2 + w_2^2} = \|w\| = (W \cdot W)^{1/2}, and for convenience we work with \frac{1}{2} w \cdot w.
L(x, y, \lambda) = \frac{1}{2} w \cdot w - \lambda [y (w_0 + w^T x) - 1]

Step 1: Write the class constraints as y (w_0 + w^T x) \ge 1:
  • y = +1 (class 1): w_0 + w^T x \ge 1
  • y = -1 (class 2): w_0 + w^T x \le -1
Step 2: Write the margin, i.e. the distance of a data point to the hyperplane:
  d = \frac{|w_1 x_1 + w_2 x_2 + w_0|}{\sqrt{w_1^2 + w_2^2}} = \frac{|w_0 + w^T x|}{\|W\|}
Step 3: Maximizing this margin is equivalent to decreasing \|w\|.
Step 4: Decrease \|w\| subject to the constraint y (w_0 + w^T x) \ge 1, and apply Lagrange multipliers.
  • π‘₯𝑖𝑦𝑖

(2, 2) 1
(4, 4) 1
(4, 0) -1
(0, 0) 1
π‘₯𝑖λ
(2, 2) 0.25
(4, 4) 0
(4, 0) 0.25
(0, 0) 0
π‘ π‘œ π‘ π‘’π‘π‘π‘œπ‘Ÿπ‘‘ π‘£π‘’π‘π‘‘π‘œπ‘Ÿπ‘  π‘Žπ‘Ÿπ‘’ (2, 2)π‘Žπ‘›π‘‘ (4, 0)

w = \sum_{i=1}^{n} \lambda_i y_i x_i = 0.25 \cdot 1 \cdot (2, 2) + 0.25 \cdot (-1) \cdot (4, 0) = (0.5, 0.5) - (1, 0) = (-0.5, 0.5)

To find w_0, use a support vector on the positive margin, e.g. (2, 2), where w_0 + w^T x = 1:
w_0 + w_1 x_1 + w_2 x_2 = 1
w_0 - 0.5 \cdot 2 + 0.5 \cdot 2 = 1
w_0 - 1 + 1 = 1
w_0 = 1

Final hyperplane equation: -0.5 x_1 + 0.5 x_2 + 1 = 0

What Is a Support Vector Machine?
Support Vector Machine is designed to find the optimal hyperplane that separates data points of different classes with the maximum margin. It's especially effective in high-dimensional spaces and for problems where the decision boundary is not linear.

Key Concepts

  • Hyperplane: A decision boundary that separates different classes. In 2D, it's a line; in 3D, it's a plane.
  • Support Vectors: The data points closest to the hyperplane. These are critical in defining the margin.
  • Margin: The distance between the hyperplane and the support vectors. SVM aims to maximize this.
  • Kernel Trick: A method to transform data into higher dimensions to make it linearly separable. Common kernels include linear, polynomial, and radial basis function (RBF).
  • Hard Margin vs. Soft Margin:
  • Hard Margin: Assumes perfect separation with no misclassification.
  • Soft Margin: Allows some misclassification to handle noisy data better.
  • Regularization (C): Controls the trade-off between maximizing the margin and minimizing classification error.

How It Works

  1. SVM identifies the hyperplane that best separates the classes.
  2. It uses support vectors to define the margin.
  3. If data isn't linearly separable, it applies the kernel trick to map it into a higher-dimensional space.
  4. Solves an optimization problem to find the best hyperplane.

Support Vector Machine (SVM) is a supervised machine learning algorithm used for classification and regression tasks. It tries to find the best boundary known as hyperplane that separates different classes in the data. It is useful when you want to do binary classification like spam vs. not spam or cat vs. dog.

The main goal of SVM is to maximize the margin between the two classes. The larger the margin the better the model performs on new and unseen data.

[Figure: support vectors and the separating hyperplane]

Key Concepts of Support Vector Machine

  • Hyperplane: A decision boundary separating different classes in feature space and is represented by the equation wx + b = 0 in linear classification.
  • Support Vectors: The closest data points to the hyperplane, crucial for determining the hyperplane and margin in SVM.
  • Margin: The distance between the hyperplane and the support vectors. SVM aims to maximize this margin for better classification performance.
  • Kernel: A function that maps data to a higher-dimensional space enabling SVM to handle non-linearly separable data.
  • Hard Margin: A maximum-margin hyperplane that perfectly separates the data without misclassifications.
  • Soft Margin: Allows some misclassifications by introducing slack variables, balancing margin maximization and misclassification penalties when data is not perfectly separable.
  • C: A regularization term balancing margin maximization and misclassification penalties. A higher C value forces stricter penalty for misclassifications.
  • Hinge Loss: A loss function penalizing misclassified points or margin violations and is combined with regularization in SVM.
  • Dual Problem: Involves solving for Lagrange multipliers associated with support vectors, facilitating the kernel trick and efficient computation.

How does Support Vector Machine Algorithm Work?

The key idea behind the SVM algorithm is to find the hyperplane that best separates two classes by maximizing the margin between them. This margin is the distance from the hyperplane to the nearest data points (support vectors) on each side.

[Figure: Multiple hyperplanes separate the data from two classes]

The best hyperplane, also known as the "hard margin", is the one that maximizes the distance between the hyperplane and the nearest data points from both classes. This ensures a clear separation between the classes. So from the above figure, we choose L2 as the hard margin. Let's consider a scenario like shown below:

[Figure: Selecting the hyperplane for data with an outlier]

Here, we have one blue ball in the boundary of the red ball.

How does SVM classify the data?

The blue ball in the boundary of the red ones is an outlier of the blue balls. The SVM algorithm has the characteristic of ignoring outliers and finding the best hyperplane that maximizes the margin, so SVM is robust to outliers.

[Figure: The most optimized hyperplane]

A soft margin allows for some misclassifications or violations of the margin to improve generalization. The SVM optimizes the following equation to balance margin maximization and penalty minimization:

\text{Objective Function} = \frac{1}{\text{margin}} + \lambda \sum \text{penalty}

The penalty used for violations is often the hinge loss, which has the following behavior (a short numeric sketch follows the list):

  • If a data point is correctly classified and lies outside the margin, there is no penalty (loss = 0).
  • If a point is incorrectly classified or violates the margin the hinge loss increases proportionally to the distance of the violation.
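A tiny numeric sketch of this hinge-loss behavior, where f is the raw score w.x + b (the score values are made up):

import numpy as np

def hinge_loss(y, f):
    # Hinge loss max(0, 1 - y*f) for label y in {-1, +1} and raw score f = w.x + b
    return np.maximum(0.0, 1.0 - y * f)

scores = np.array([2.5, 0.4, -1.0])   # made-up raw scores for three points
labels = np.array([+1, +1, +1])

print(hinge_loss(labels, scores))     # [0.  0.6 2. ] -> no penalty, margin violation, misclassified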

Till now we were talking about linearly separable data, i.e. data where the group of blue balls and the group of red balls can be separated by a straight (linear) line.

What if data is not linearly separable?

When data is not linearly separable, i.e. it can't be divided by a straight line, SVM uses a technique called kernels to map the data into a higher-dimensional space where it becomes separable. This transformation helps SVM find a decision boundary even for non-linear data.

[Figure: Original 1D dataset for classification]

A kernel is a function that maps data points into a higher-dimensional space without explicitly computing the coordinates in that space. This allows SVM to work efficiently with non-linear data by implicitly performing the mapping. For example, consider data points that are not linearly separable: by applying a kernel function, SVM transforms the data points into a higher-dimensional space where they become linearly separable.

  • Linear Kernel: For linear separability.
  • Polynomial Kernel: Maps data into a polynomial space.
  • Radial Basis Function (RBF) Kernel: Transforms data into a space based on distances between data points.
[Figure: Mapping 1D data to 2D to make the two classes separable]

In this case the new variable y is created as a function of distance from the origin.
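A minimal sketch of such a 1D-to-2D mapping, here using phi(x) = (x, x^2) so that the new second coordinate grows with the distance from the origin (the sample points are made up):

import numpy as np

x = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])   # made-up 1D points
labels = np.where(np.abs(x) > 1, 1, -1)                 # outer vs inner points: not separable in 1D

phi = np.column_stack([x, x**2])   # map each point to (x, x^2); x^2 grows with distance from the origin

# In the 2D feature space a horizontal line (second coordinate = 1) now separates the classes
print(phi)
print((phi[:, 1] > 1).astype(int) * 2 - 1 == labels)    # all True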

Mathematical Computation of SVM

Consider a binary classification problem with two classes, labeled as +1 and -1. We have a training dataset consisting of input feature vectors X and their corresponding class labels Y. The equation for the linear hyperplane can be written as:

w^T x + b = 0

Where:

  • w is the normal vector to the hyperplane (the direction perpendicular to it).
  • b is the offset or bias term, representing the distance of the hyperplane from the origin along the normal vector w.

Distance from a Data Point to the Hyperplane

The distance between a data point x_i and the decision boundary can be calculated as:

d_i = \frac{w^T x_i + b}{\|w\|}

where ||w|| represents the Euclidean norm of the weight vector w.

Linear SVM Classifier

The predicted class label is obtained from the sign of the decision function:

\hat{y} = \begin{cases} 1 & \text{if } w^T x + b \ge 0 \\ 0 & \text{if } w^T x + b < 0 \end{cases}

Where \hat{y} is the predicted label of a data point.
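A one-function sketch of this prediction rule (the weight vector and bias are made-up values):

import numpy as np

def predict(w, b, x):
    # Predicted label: 1 if w.x + b >= 0, otherwise 0
    return 1 if np.dot(w, x) + b >= 0 else 0

w = np.array([0.4, -0.7])    # made-up weight vector
b = 0.1                      # made-up bias
print(predict(w, b, np.array([2.0, 1.0])))   # 0.8 - 0.7 + 0.1 = 0.2 >= 0 -> 1
print(predict(w, b, np.array([0.0, 1.0])))   # -0.7 + 0.1 = -0.6 < 0  -> 0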

Optimization Problem for SVM

For a linearly separable dataset the goal is to find the hyperplane that maximizes the margin between the two classes while ensuring that all data points are correctly classified. This leads to the following optimization problem:

\min_{w, b} \; \frac{1}{2} \|w\|^2

Subject to the constraint:

y_i (w^T x_i + b) \ge 1 \quad \text{for } i = 1, 2, 3, \dots, m

Where:

  • y_i is the class label (+1 or -1) for each training instance.
  • x_i is the feature vector for the i-th training instance.
  • m is the total number of training instances.

The condition y_i (w^T x_i + b) \ge 1 ensures that each data point is correctly classified and lies outside the margin.

Soft Margin in Linear SVM Classifier

In the presence of outliers or non-separable data, the SVM allows some misclassification by introducing slack variables \zeta_i. The optimization problem is modified as:

\min_{w, b} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{m} \zeta_i

Subject to the constraints:

y_i (w^T x_i + b) \ge 1 - \zeta_i \quad \text{and} \quad \zeta_i \ge 0 \quad \text{for } i = 1, 2, \dots, m

Where:

  • C is a regularization parameter that controls the trade-off between margin maximization and the penalty for misclassifications.
  • \zeta_i are slack variables that represent the degree of violation of the margin by each data point.

Dual Problem for SVM

The dual problem involves maximizing the dual objective with respect to the Lagrange multipliers associated with the support vectors. This transformation allows solving the SVM optimization using kernel functions for non-linear classification.

The dual objective function is given by:

\max_{\alpha} \; \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \alpha_i \alpha_j t_i t_j K(x_i, x_j)

Where:

  • \alpha_i are the Lagrange multipliers associated with the i-th training sample.
  • t_i is the class label for the i-th training sample.
  • K(x_i, x_j) is the kernel function that computes the similarity between data points x_i and x_j. The kernel allows SVM to handle non-linear classification problems by mapping data into a higher-dimensional space.

The dual formulation optimizes the Lagrange multipliers \alpha_i, and the support vectors are those training samples where \alpha_i > 0.
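For illustration, here are minimal versions of the kernel functions K(x_i, x_j) mentioned above (linear, polynomial, and RBF), written directly from their textbook definitions; the degree, coef0, and gamma values below are just example choices:

import numpy as np

def linear_kernel(xi, xj):
    # K(xi, xj) = xi . xj
    return np.dot(xi, xj)

def polynomial_kernel(xi, xj, degree=3, coef0=1.0):
    # K(xi, xj) = (xi . xj + coef0)^degree
    return (np.dot(xi, xj) + coef0) ** degree

def rbf_kernel(xi, xj, gamma=0.5):
    # K(xi, xj) = exp(-gamma * ||xi - xj||^2)
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

a, b = np.array([1.0, 2.0]), np.array([2.0, 0.0])
print(linear_kernel(a, b), polynomial_kernel(a, b), rbf_kernel(a, b))   # 2.0  27.0  ~0.082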

SVM Decision Boundary

Once the dual problem is solved, the decision boundary is given by:

f(x) = \sum_{i=1}^{m} \alpha_i t_i K(x_i, x) + b

Where x is a test data point, \alpha_i and t_i come from the training samples, and b is the bias term. Finally, the bias term b is determined from the support vectors, which satisfy:

t_i (w^T x_i - b) = 1 \;\Rightarrow\; b = w^T x_i - t_i

Where x_i is any support vector.

This completes the mathematical framework of the Support Vector Machine algorithm which allows for both linear and non-linear classification using the dual problem and kernel trick.

Types of Support Vector Machine

Based on the nature of the decision boundary, Support Vector Machines (SVM) can be divided into two main parts:

  • Linear SVM: Linear SVMs use a linear decision boundary to separate the data points of different classes. When the data can be precisely linearly separated, linear SVMs are very suitable. This means that a single straight line (in 2D) or a hyperplane (in higher dimensions) can entirely divide the data points into their respective classes. A hyperplane that maximizes the margin between the classes is the decision boundary.
  • Non-Linear SVM: Non-linear SVM can be used to classify data when it cannot be separated into two classes by a straight line (in the case of 2D). By using kernel functions, nonlinear SVMs can handle nonlinearly separable data. The original input data is transformed by these kernel functions into a higher-dimensional feature space where the data points can be linearly separated. A linear SVM is then used to locate the decision boundary in this modified space, which corresponds to a nonlinear boundary in the original space.

Implementing SVM Algorithm Using Scikit-Learn

We will predict whether cancer is Benign or Malignant using historical data about patients diagnosed with cancer. This data includes independent attributes such as tumor size, texture, and others. To perform this classification, we will use an SVM (Support Vector Machine) classifier to differentiate between benign and malignant cases effectively.

  • load_breast_cancer(): Loads the breast cancer dataset (features and target labels).
  • SVC(kernel="linear", C=1): Creates a Support Vector Classifier with a linear kernel and regularization parameter C=1.
  • svm.fit(X, y): Trains the SVM model on the feature matrix X and target labels y.
  • DecisionBoundaryDisplay.from_estimator(): Visualizes the decision boundary of the trained model with a specified color map.
  • plt.scatter(): Creates a scatter plot of the data points, colored by their labels.
  • plt.show(): Displays the plot on the screen.

from sklearn.datasets import load_breast_cancer
import matplotlib.pyplot as plt
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn.svm import SVC

# Load the dataset and keep only the first two features so the boundary can be plotted in 2D
cancer = load_breast_cancer()
X = cancer.data[:, :2]
y = cancer.target

# Train a linear-kernel SVM classifier
svm = SVC(kernel="linear", C=1)
svm.fit(X, y)

# Plot the decision boundary together with the data points
DecisionBoundaryDisplay.from_estimator(
    svm,
    X,
    response_method="predict",
    alpha=0.8,
    cmap="Pastel1",
    xlabel=cancer.feature_names[0],
    ylabel=cancer.feature_names[1],
)
plt.scatter(X[:, 0], X[:, 1], c=y, s=20, edgecolors="k")
plt.show()

Output:

[Output figure: decision boundary of the linear SVM on the first two features]
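As a possible extension of the example above (a sketch, not part of the original snippet), one could hold out a test split on all features and report accuracy instead of only plotting the boundary:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, test_size=0.2, random_state=42
)

svm = SVC(kernel="linear", C=1)
svm.fit(X_train, y_train)
print("Test accuracy:", svm.score(X_test, y_test))   # accuracy on the held-out 20% split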

Advantages of Support Vector Machine (SVM)

  1. High-Dimensional Performance: SVM excels in high-dimensional spaces, making it suitable for image classification and gene expression analysis.
  2. Nonlinear Capability: Utilizing kernel functions like RBF and polynomial SVM effectively handles nonlinear relationships.
  3. Outlier Resilience: The soft margin feature allows SVM to ignore outliers, enhancing robustness in spam detection and anomaly detection.
  4. Binary and Multiclass Support: SVM is effective for both binary and multiclass classification, making it suitable for applications such as text classification.
  5. Memory Efficiency: It focuses on support vectors making it memory efficient compared to other algorithms.

Disadvantages of Support Vector Machine (SVM)

  1. Slow Training: SVM can be slow to train on large datasets, which affects performance in data mining tasks.
  2. Parameter Tuning Difficulty: Selecting the right kernel and adjusting parameters like C requires careful tuning, impacting SVM algorithms.
  3. Noise Sensitivity: SVM struggles with noisy datasets and overlapping classes, limiting effectiveness in real-world scenarios.
  4. Limited Interpretability: The complexity of the hyperplane in higher dimensions makes SVM less interpretable than other models.
  5. Feature Scaling Sensitivity: Proper feature scaling is essential, otherwise SVM models may perform poorly.
