SageMaker MLOps Pipeline - Dev & Prod


About This Architecture

This production-grade MLOps architecture separates dev and prod SageMaker endpoints behind an Application Load Balancer with WAF protection, both running ml.g5.2xlarge instances with auto-scaling. The training pipeline feeds a SageMaker Training Job that stores artifacts in S3, while Step Functions orchestrates retraining triggered by EventBridge and monitored via SageMaker Model Monitor. Feature Store centralizes feature management across environments, CloudWatch tracks endpoint performance, and Lambda functions process data and orchestrate workflows. The architecture demonstrates AWS best practices for continuous model deployment, automated retraining, and the environment isolation critical for regulated ML workloads. Fork this diagram on Diagrams.so to customize instance types, add CI/CD stages, or integrate it with your model registry and monitoring stack.
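The retraining loop described above (EventBridge trigger, Step Functions orchestration, SageMaker Training Job, artifacts in S3) can be sketched as an Amazon States Language definition. This is a minimal illustration, not the diagram's actual definition: the image URIs, role ARN, bucket, and state names are hypothetical placeholders.

```python
import json

# Hedged sketch of a Step Functions state machine for the retraining flow.
# All <...> values are placeholders, not real resources.
retrain_definition = {
    "Comment": "Retraining pipeline: train a model, then register it",
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            # The .sync integration makes Step Functions wait for the
            # training job to complete before moving to the next state.
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {
                "TrainingJobName.$": "$.jobName",
                "AlgorithmSpecification": {
                    "TrainingImage": "<training-image-uri>",   # placeholder
                    "TrainingInputMode": "File",
                },
                "RoleArn": "<sagemaker-execution-role-arn>",   # placeholder
                "OutputDataConfig": {
                    # Trained artifacts land in S3, as in the diagram.
                    "S3OutputPath": "s3://<artifact-bucket>/models/"
                },
                "ResourceConfig": {
                    "InstanceType": "ml.g5.2xlarge",
                    "InstanceCount": 1,
                    "VolumeSizeInGB": 50,
                },
                "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
            },
            # Keep the training-job output under $.TrainModel so the next
            # state can reference the model artifact location.
            "ResultPath": "$.TrainModel",
            "Next": "RegisterModel",
        },
        "RegisterModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createModel",
            "Parameters": {
                "ModelName.$": "$.jobName",
                "ExecutionRoleArn": "<sagemaker-execution-role-arn>",
                "PrimaryContainer": {
                    "Image": "<inference-image-uri>",          # placeholder
                    "ModelDataUrl.$": "$.TrainModel.ModelArtifacts.S3ModelArtifacts",
                },
            },
            "End": True,
        },
    },
}

# Serialize to the JSON that create_state_machine / EventBridge would use.
definition_json = json.dumps(retrain_definition)
```

An EventBridge rule (scheduled, or fired by a Model Monitor drift alarm) would then start an execution of this state machine with a fresh `jobName` in its input.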

People also ask

How do I architect a production MLOps pipeline on AWS SageMaker with automated retraining and environment separation?

Deploy separate dev and prod SageMaker endpoints behind an ALB with WAF, use Feature Store for centralized features, orchestrate retraining with Step Functions triggered by EventBridge, and monitor with SageMaker Model Monitor and CloudWatch. This diagram shows the complete architecture, including auto-scaling, a model registry, and Lambda orchestration.
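The auto-scaling piece of that answer is configured through Application Auto Scaling against the endpoint's production variant. The sketch below builds the request shapes for `register_scalable_target` and `put_scaling_policy` as plain dicts (no AWS calls); the endpoint name, variant name, capacities, and target value are hypothetical.

```python
# Hedged sketch: target-tracking auto-scaling for a SageMaker endpoint
# variant. Names and numbers are placeholders, not the diagram's values.
endpoint_name = "prod-inference-endpoint"  # placeholder
variant_name = "AllTraffic"                # placeholder

# Register the variant's instance count as a scalable dimension.
scalable_target = {
    "ServiceNamespace": "sagemaker",
    "ResourceId": f"endpoint/{endpoint_name}/variant/{variant_name}",
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "MinCapacity": 1,
    "MaxCapacity": 4,
}

# Track invocations-per-instance so SageMaker adds ml.g5.2xlarge capacity
# under load and scales back in when traffic drops.
scaling_policy = {
    "PolicyName": "invocations-target-tracking",
    "ServiceNamespace": "sagemaker",
    "ResourceId": scalable_target["ResourceId"],
    "ScalableDimension": scalable_target["ScalableDimension"],
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Aim for ~70 invocations per minute per instance (placeholder).
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,  # seconds before scaling in again
        "ScaleOutCooldown": 60,  # seconds before scaling out again
    },
}
```

With boto3, these dicts would be passed as `**kwargs` to the `application-autoscaling` client's `register_scalable_target` and `put_scaling_policy` calls, first for the dev endpoint and then, once validated, for prod.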


Tags: AWS · advanced · SageMaker · MLOps · Machine Learning · Auto Scaling · Step Functions
Domain: ML Pipeline
Audience: ML engineers and MLOps practitioners deploying production SageMaker inference pipelines

Created: February 20, 2026
Updated: February 20, 2026 at 8:57 AM
Type: network
