The advent of MLOps provides a structure for automating machine learning workflows, leading to increased efficiency and consistency. However, the computational demands of large-scale model training and deployment can result in significant infrastructure costs. Moreover, many existing solutions are designed with large models in mind, and data scientists and engineers are often unaware of how the real cost of training and deploying small models differs from that of architectures built for large-model workloads, making it unclear which approach is more cost-effective. This dissertation investigates techniques to optimize these costs by leveraging mainstream cloud services and attempts to provide a viable approach to reducing them.