Design and Optimization of End-To-End Artificial Intelligence and Machine Learning Pipelines in CI/CD-Enabled Cloud Infrastructures

Authors

  • Mohammad Robel Miah, Master of Science in Computer Science, Prairie View A&M University, TX, USA

DOI:

https://doi.org/10.63125/sdvder84

Keywords:

AI/ML Pipelines, CI/CD, Cloud Optimization, MLOps, Distributed Training

Abstract

This quantitative study examined the design and optimization of end-to-end artificial intelligence and machine learning (AI/ML) pipelines deployed in CI/CD-enabled cloud infrastructures by evaluating measurable relationships between pipeline engineering maturity and operational outcomes. A total of 214 valid responses from AI/ML, DevOps/MLOps, and cloud engineering professionals were analyzed across the technology, finance, healthcare, manufacturing, and logistics sectors. Descriptive results showed high maturity in component separation (M = 4.12, SD = 0.63), schema validation (M = 4.18, SD = 0.54), and automated test coverage (M = 4.05, SD = 0.58), while greater variability was observed in dependency complexity (M = 3.41, SD = 0.72) and drift detection (M = 3.78, SD = 0.75). Operational indicators revealed an average training runtime of 142.6 minutes (SD = 38.4), GPU utilization of 81.7% (SD = 6.8), scaling efficiency of 78.3% (SD = 9.5), and inference latency of 84.2 ms (SD = 15.6). Reliability analysis confirmed strong internal consistency across constructs, with Cronbach's alpha values ranging from 0.83 to 0.91. Regression results indicated that modularity, automation depth, and validation effectiveness significantly reduced pipeline runtime (R² = 0.528), while monitoring readiness, test coverage, and gate pass rate significantly reduced change failure rate (R² = 0.496). Autoscaling responsiveness, caching maturity, and serving optimization significantly improved tail latency stability (R² = 0.551). Cost models showed that scaling efficiency and GPU utilization significantly reduced training cost per run (R² = 0.467), while caching maturity reduced inference cost per 1,000 predictions (R² = 0.419). Overall, the findings demonstrated that measurable pipeline optimization outcomes were strongly associated with architectural modularity, CI/CD automation depth, validation rigor, and observability maturity in cloud-native AI/ML delivery systems.
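To make the analytical approach concrete, the minimal Python sketch below illustrates how the two statistics reported in the abstract could be computed: Cronbach's alpha for a construct's Likert items, and an OLS regression analogous to the runtime model (R² = 0.528). The data and column names here are synthetic, hypothetical stand-ins, not the study's actual dataset or variable names.

```python
# Illustrative sketch only: synthetic data with made-up column names,
# not the study's dataset. Requires numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic survey frame (N = 214 mirrors the sample size in the abstract)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "modularity": rng.normal(4.1, 0.6, 214),
    "automation_depth": rng.normal(4.0, 0.6, 214),
    "validation_effectiveness": rng.normal(4.2, 0.5, 214),
})
# Hypothetical outcome: runtime decreases with each maturity predictor
df["runtime_min"] = (
    300
    - 20 * df["modularity"]
    - 15 * df["automation_depth"]
    - 10 * df["validation_effectiveness"]
    + rng.normal(0, 30, 214)
)

# OLS model in the spirit of the reported runtime regression
X = sm.add_constant(df[["modularity", "automation_depth",
                        "validation_effectiveness"]])
model = sm.OLS(df["runtime_min"], X).fit()
print(f"R-squared: {model.rsquared:.3f}")  # analogue of the reported R²
```

On synthetic data the printed R² will not match the study's values; the sketch only shows the form of the reliability and regression computations described in the abstract.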

Published

2026-02-15

How to Cite

Mohammad Robel Miah. (2026). Design and Optimization of End-To-End Artificial Intelligence and Machine Learning Pipelines in CI/CD-Enabled Cloud Infrastructures. International Journal of Scientific Interdisciplinary Research, 7(1), 234–279. https://doi.org/10.63125/sdvder84