- M.A. YA'A¹, Mohammed Idris² & Dr. GT Obadiah³
- DOI: 10.5281/zenodo.17080669
- SSR Journal of Artificial Intelligence (SSRJAI)
Artificial intelligence (AI) and machine learning (ML) are advancing rapidly. Many systems now use these technologies for tasks of every kind, from medicine to the military and from agriculture to industry, and robotics likewise relies on ML to train machines. Yet while AI systems are increasingly built to perform beneficial tasks, they can also be trained for malicious ones, threatening security not only in its digital dimension but in its physical and political dimensions as well. This article summarizes and examines the security of AI systems, drawing primarily on the findings of the report "The Malicious Use of Artificial Intelligence."

The rapid integration of Artificial Intelligence (AI) and Machine Learning (ML) systems into critical sectors such as finance, healthcare, defense, and governance has amplified both opportunities and risks. While AI-driven ML models provide adaptive solutions, enhanced decision-making, and predictive accuracy, they also introduce novel security vulnerabilities that traditional systems were not designed to handle. This study examines the security implications of Artificial Intelligence in Machine Learning systems, highlighting how adversarial attacks, data poisoning, model inversion, and algorithmic manipulation can compromise trust, confidentiality, and integrity. It explores the dual-use dilemma, in which the same AI algorithms designed to secure systems can be exploited by malicious actors to launch sophisticated cyberattacks. Additionally, the research addresses the ethical and policy challenges of deploying AI-driven ML in sensitive domains, emphasizing the risks of bias, opacity, and lack of accountability in automated decision-making. The work draws attention to the importance of robust security frameworks, adversarial resilience, explainable AI (XAI), and regulatory oversight as critical strategies for safeguarding machine learning ecosystems. By integrating perspectives from cybersecurity, data science, and policy, this paper contributes to an interdisciplinary understanding of AI security challenges and their implications for global digital infrastructure. The findings underscore that securing AI-enabled machine learning systems is not just a technical necessity but a societal imperative for maintaining trust, safety, and stability in an increasingly automated world.
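To make one of the attack classes named above concrete, the sketch below illustrates an adversarial attack in the style of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. It is a minimal illustration under stated assumptions, not a method from this paper or the cited report: the weights, the epsilon bound, and the helper names (`predict_proba`, `fgsm_perturb`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: logistic regression with fixed "pretrained" weights.
# (Hypothetical model, used only to demonstrate the attack mechanics.)
w = rng.normal(size=20)   # weight vector
b = 0.1                   # bias term

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y_true, epsilon=0.25):
    """FGSM: nudge each feature of x in the direction that increases
    the classifier's loss, bounded by epsilon per feature."""
    p = predict_proba(x)
    # Gradient of the binary cross-entropy loss w.r.t. the input:
    # dL/dx = (p - y_true) * w for a logistic model.
    grad = (p - y_true) * w
    return x + epsilon * np.sign(grad)

# A clean input the model classifies with some confidence...
x_clean = rng.normal(size=20)
y_true = 1.0 if predict_proba(x_clean) >= 0.5 else 0.0

# ...and its adversarial counterpart: a small, bounded perturbation
# that can flip the model's decision.
x_adv = fgsm_perturb(x_clean, y_true)

print(f"clean  p(class 1) = {predict_proba(x_clean):.3f}")
print(f"attack p(class 1) = {predict_proba(x_adv):.3f}")
```

The point of the sketch is that the perturbation is tiny and bounded per feature, yet, because it is aligned with the loss gradient, it can reliably push the model's output across the decision boundary; this is the core vulnerability that adversarial-resilience measures discussed in this study aim to mitigate.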