Improving Classification in Support Vector Machine Using Neutrosophic Logic
Keywords:
Support Vector Machine; Kernel Trick; Neutrosophic; Indeterminacy; Classification.

Abstract
Support Vector Machine (SVM) is considered one of the most effective methods for
classification tasks. However, observations lying close to the optimal separating hyperplane are often indeterminate in their class membership, and the resulting misclassifications remain among the main challenges researchers face when using SVM. To address these challenges, a Neutrosophic logic-based approach was proposed, enabling better handling of class ambiguity and reducing misclassifications in the neighborhood of the optimal separating hyperplane. A novel algorithm was introduced, consisting of two main
steps. First, the observations located near the optimal separating hyperplane were converted into
Neutrosophic data, characterized by three components: truth, indeterminacy, and falsity. In the
second step, the SVM classifier was reapplied to the Neutrosophic data to improve the classification
accuracy. As a case study, the proposed algorithm was tested on two types of oil samples (sunflower
oil and corn oil), and its performance was compared with that of a standard SVM without
Neutrosophic data. The results demonstrated that the SVM models utilizing Neutrosophic data
achieved higher accuracy than those without Neutrosophic data. Therefore, integrating Neutrosophic logic with SVM can significantly enhance the performance and reliability of SVM-based models.
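As a minimal illustration of the two-step idea, the sketch below trains a baseline SVM with scikit-learn, maps each observation's signed distance to the hyperplane into (truth, indeterminacy, falsity) components, and refits the SVM on the augmented representation. The membership functions in `neutrosophic_components`, the margin band, the synthetic data, and the choice to append the components to every observation (rather than only those near the hyperplane) are assumptions for illustration; the abstract does not specify the paper's exact neutrosophication formulas or the oil-sample features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data standing in for the oil samples in the paper.
X, y = make_classification(n_samples=400, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 0: fit a standard SVM and compute signed distances to the hyperplane.
base = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
d_tr, d_te = base.decision_function(X_tr), base.decision_function(X_te)

def neutrosophic_components(d, band=1.0):
    """Map a signed distance d to (truth, indeterminacy, falsity).
    Points far from the hyperplane get high truth or falsity; points
    inside the +/- band around it get high indeterminacy.
    (Illustrative membership functions, not the paper's exact formulas.)"""
    t = 1.0 / (1.0 + np.exp(-d))                    # support for the positive class
    f = 1.0 - t                                     # support for the negative class
    i = np.clip(1.0 - np.abs(d) / band, 0.0, 1.0)   # largest near the margin
    return np.column_stack([t, i, f])

# Step 1: neutrosophify -- append (T, I, F) to the original features.
X_tr_n = np.hstack([X_tr, neutrosophic_components(d_tr)])
X_te_n = np.hstack([X_te, neutrosophic_components(d_te)])

# Step 2: re-apply the SVM to the Neutrosophic representation.
neut = SVC(kernel="rbf", gamma="scale").fit(X_tr_n, y_tr)
print("baseline accuracy    :", base.score(X_te, y_te))
print("neutrosophic accuracy:", neut.score(X_te_n, y_te))
```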
License
Copyright (c) 2025 Neutrosophic Sets and Systems

This work is licensed under a Creative Commons Attribution 4.0 International License.

