ZenML

Your unified toolkit for shipping everything from decision trees to complex AI agents.


Projects • Roadmap • Report Bug • Sign up for ZenML Pro • Blog

🎉 For the latest release, see the release notes.


ZenML is built for ML and AI engineers working on traditional ML use cases, LLM workflows, or agents in a company setting.

At its core, ZenML lets you write workflows (pipelines) that run on any infrastructure backend (stacks). You can embed any Pythonic logic in these pipelines, such as training a model or running an agentic loop. ZenML then operationalizes your application by:

  1. Automatically containerizing and tracking your code.
  2. Tracking individual runs with metrics, logs, and metadata.
  3. Abstracting away infrastructure complexity.
  4. Integrating with your existing tools and infrastructure, e.g. MLflow, LangGraph, Langfuse, SageMaker, GCP Vertex AI.
  5. Letting you iterate quickly on experiments, with an observability layer in both development and production.

...amongst many other features.

ZenML is used by thousands of companies to run their AI workflows. Here are some featured ones:

Airbus • AXA • JetBrains • Rivian • WiseTech Global • Brevo
Leroy Merlin • Koble • Playtika • NIQ • Enel

(please email support@zenml.io if you want to be featured)

🚀 Get Started (5 minutes)

πŸ—οΈ Architecture Overview

ZenML uses a client-server architecture with an integrated web dashboard (zenml-io/zenml-dashboard):

  • Local Development: pip install "zenml[server]" - runs both client and server locally
  • Production: Deploy server separately, connect with pip install zenml + zenml login <server-url>
# Install ZenML with server capabilities
pip install "zenml[server]"

# Install required dependencies
pip install scikit-learn openai numpy

# Initialize your ZenML repository
zenml init

# Start local server or connect to a remote one
zenml login
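
To make this concrete, below is a minimal sketch of the step/pipeline API; the step logic and names are illustrative, not from the official quickstart:

# minimal_pipeline.py - a hypothetical example of ZenML's step/pipeline decorators
from zenml import pipeline, step

@step
def load_data() -> dict:
    # Stand-in for real data loading logic
    return {"features": [[1.0], [2.0], [3.0]], "labels": [0, 1, 1]}

@step
def train_model(data: dict) -> None:
    # Stand-in for real training logic; any Python code works here
    print(f"Training on {len(data['labels'])} examples")

@pipeline
def simple_ml_pipeline():
    train_model(load_data())

if __name__ == "__main__":
    simple_ml_pipeline()  # runs on whatever stack is currently active

Running the script executes the pipeline on your active stack, and the run shows up in the dashboard with its logs, metadata, and artifacts.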

A short demo video (zenml_demo_comp.mp4) accompanies this README.

πŸ—£οΈ Chat With Your Pipelines: ZenML MCP Server

Stop clicking through dashboards to understand your ML workflows. The ZenML MCP Server lets you query your pipelines, analyze runs, and trigger deployments using natural language through Claude Desktop, Cursor, or any MCP-compatible client.

πŸ’¬ "Which pipeline runs failed this week and why?"
πŸ“Š "Show me accuracy metrics for all my customer churn models"  
πŸš€ "Trigger the latest fraud detection pipeline with production data"

Quick Setup:

  1. Download the .dxt file from zenml-io/mcp-zenml
  2. Drag it into Claude Desktop settings
  3. Add your ZenML server URL and API key
  4. Start chatting with your ML infrastructure

The MCP (Model Context Protocol) integration transforms your ZenML metadata into conversational insights, making pipeline debugging and analysis as easy as asking a question. Perfect for teams who want to democratize access to ML operations without requiring dashboard expertise.

📚 Learn More

πŸ–ΌοΈ Getting Started Resources

The best way to learn about ZenML is through our comprehensive documentation and tutorials:

📖 Production Examples

  1. Agent Architecture Comparison - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
  2. Minimal Agent Production - Document analysis service with pipelines, evaluation, and web UI
  3. E2E Batch Inference - Complete MLOps pipeline with feature engineering
  4. LLM RAG Pipeline - Production RAG with evaluation loops
  5. Agentic Workflow (Deep Research) - Orchestrate your agents with ZenML
  6. Fine-tuning Pipeline - Fine-tune and deploy LLMs

🎓 Books & Resources

ZenML is featured in these comprehensive guides to production AI systems.

🤝 Join ML Engineers Building the Future of AI

Contribute:

Stay Updated:

  • 🗺 Public Roadmap - See what's coming next
  • 📰 Blog - Best practices and case studies
  • 🎙 Slack - Talk with AI practitioners

❓ FAQs from ML Engineers Like You

Q: "Do I need to rewrite my agents or models to use ZenML?"

A: No. Wrap your existing code in a @step. Keep using scikit-learn, PyTorch, LangGraph, LlamaIndex, or raw API calls. ZenML orchestrates your tools, it doesn't replace them.
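
For example, here is a minimal sketch of wrapping existing scikit-learn code in a step; the model and dataset are illustrative:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from zenml import step

@step
def train() -> LogisticRegression:
    # Existing scikit-learn code, unchanged apart from the decorator
    X, y = load_iris(return_X_y=True)
    return LogisticRegression(max_iter=200).fit(X, y)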

Q: "How is this different from LangSmith/Langfuse?"

A: They provide excellent observability for LLM applications. We orchestrate the full MLOps lifecycle for your entire AI stack. With ZenML, you manage both your classical ML models and your AI agents in one unified framework, from development and evaluation all the way to production deployment.

Q: "Can I use my existing MLflow/W&B setup?"

A: Yes! ZenML integrates with both MLflow and Weights & Biases. Your experiments, our pipelines.
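
As a rough sketch, a step can log to MLflow through ZenML's experiment tracker integration; the tracker name "mlflow_tracker" below is a placeholder for whatever you registered in your stack:

import mlflow
from zenml import step

@step(experiment_tracker="mlflow_tracker")
def evaluate() -> float:
    # Metrics logged here appear in your existing MLflow UI
    accuracy = 0.92  # placeholder value
    mlflow.log_metric("accuracy", accuracy)
    return accuracy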

Q: "Is this just MLflow with extra steps?"

A: No. MLflow tracks experiments. We orchestrate the entire development process – from training and evaluation to deployment and monitoring – for both models and agents.

Q: "How do I configure ZenML with Kubernetes?"

A: ZenML integrates with Kubernetes through the native Kubernetes orchestrator, Kubeflow, and other K8s-based orchestrators. See our Kubernetes orchestrator guide and Kubeflow guide, plus deployment documentation.

Q: "What about cost? I can't afford another platform."

A: ZenML's open-source version is free forever. You likely already have the required infrastructure (like a Kubernetes cluster and object storage). We just help you make better use of it for MLOps.

🛠 VS Code / Cursor Extension

Manage pipelines directly from your editor:

🖥️ VS Code Extension in Action!

Install from VS Code Marketplace.

📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. See LICENSE for details.