Multi-criteria neutrosophic method to evaluate the discriminatory algorithms of Artificial Intelligence
Abstract
This research examines how artificial intelligence (AI) impacts fundamental rights, focusing on algorithmic discrimination. The objective is to apply a multi-criteria neutrosophic method to evaluate discriminatory AI algorithms and to assess the main AI systems and their effects on equality and non-discrimination. The analysis identified biases in training data that can lead to discriminatory decisions; a clear example is the Canva platform, which associates poverty with indigenous people. The research underlines the importance of transparency and accountability in AI algorithms, proposing that the neutrosophic method will not only help identify and evaluate biases but also facilitate the development of recommendations to mitigate discrimination and ensure the protection of fundamental rights. In this way, it seeks to contribute to the creation of fairer and more equitable AI systems, promoting equality and non-discrimination in their implementation.
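The abstract does not detail the scoring mechanics, but one common way to operationalize a multi-criteria neutrosophic evaluation is to rate each AI system on each criterion with a single-valued neutrosophic number (truth T, indeterminacy I, falsity F) and then aggregate weighted scores. The sketch below illustrates this idea; the criteria names, weights, ratings, and score function are assumptions for illustration, not values taken from the article.

```python
# Minimal sketch of a multi-criteria neutrosophic evaluation (illustrative only).
# Each criterion rating is a single-valued neutrosophic number (T, I, F):
# degrees of truth, indeterminacy, and falsity, each in [0, 1].
from typing import Dict, Tuple

Neutrosophic = Tuple[float, float, float]  # (T, I, F)

def score(n: Neutrosophic) -> float:
    """A commonly used score function for a single-valued neutrosophic number."""
    t, i, f = n
    return (2 + t - i - f) / 3

def evaluate(ratings: Dict[str, Neutrosophic], weights: Dict[str, float]) -> float:
    """Weighted aggregate score of one AI system over all criteria."""
    total_weight = sum(weights.values())
    return sum(weights[c] * score(r) for c, r in ratings.items()) / total_weight

# Hypothetical criteria and weights for assessing discriminatory behavior.
weights = {"training-data bias": 0.4, "transparency": 0.3, "accountability": 0.3}

# Hypothetical expert ratings for one AI system.
ratings = {
    "training-data bias": (0.3, 0.4, 0.6),  # weak truth, high falsity: bias suspected
    "transparency":       (0.5, 0.3, 0.4),
    "accountability":     (0.6, 0.2, 0.3),
}

print(f"Aggregate neutrosophic score: {evaluate(ratings, weights):.3f}")
```

Systems with lower aggregate scores would then be flagged for closer review and mitigation measures, in line with the recommendations the article proposes.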
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.