Building ML platforms that turn ideas into production-grade products

I'm an MLOps & Platform Engineer who builds and operates cloud-based ML systems—from data ingestion and feature pipelines to automated training, deployment, and monitoring. I focus on making models reliable, scalable, and ready for real-world decision-making.

MLOps · Platform Engineering · Cloud Infrastructure · AI/ML Systems · DevOps
01

What I Build

End-to-End ML Platforms

Building complete ML infrastructure: data ingestion, feature stores, automated training pipelines, model registries, and production deployment—systems that process hundreds of millions of rows daily.

AI/ML Engineering

From AutoML pipelines and time-series forecasting to LLM integrations and knowledge graphs. Building intelligent systems that solve real problems—with a focus on operational reliability, not just model accuracy.

CI/CD & Automation

Designing GitOps workflows, optimizing build pipelines, and creating "golden paths" that let teams ship ML faster and more safely—without sacrificing reliability or auditability.

Database Infrastructure

Operating large-scale database systems: migrations, schema management, backup strategies, and disaster recovery. Treating databases as high-risk assets where reliability directly impacts business outcomes.

02

How I Think

I approach ML systems with questions that bridge technical and business domains: Who uses this platform? What operational risk does it reduce? How does it scale when teams change?

My background in Management Information Systems gave me fluency in business models and organizational decision-making. Rather than pursuing a commercial path, I invested in technical depth—with the goal of understanding why systems are built, not just how.

This dual perspective shapes everything I build. I naturally think about second-order consequences: what happens when ownership changes, when priorities shift, when the model that worked yesterday starts drifting.

Observability First

Debuggability is a design constraint, not an afterthought. If you can't understand why a model failed in production, you haven't finished building it.

Design for Failure

Happy paths are easy. Resilience comes from deeply understanding failure modes and building systems that degrade gracefully.

Question Assumptions

Continuously challenge data quality, model behavior, and operational trust. The best MLOps is skeptical MLOps.

Sensible Defaults

Opinionated platforms with clear escape hatches beat configuration sprawl. Make the right thing easy and the wrong thing hard.

03

About

I'm a business-aware engineer who deliberately chose a deep technical path in MLOps and platform engineering.

I studied Management & Technology at TU Munich, with a thesis examining temporal effects of gender bias in transformer models—work that sharpened my ability to reason about second-order consequences of technical decisions.

Before that, I studied Management Information Systems at Boğaziçi University, Turkey's top-ranked university, which gave me a foundation in how organizations make decisions and create value.

Outside of engineering, I'm drawn to photography: I find parallels between capturing decisive moments and designing systems that behave predictably under pressure. Currently based in Berlin.

Focus

  • AI Platform Engineering
  • Production ML Systems
  • Data Science
  • DevOps
  • Database Infrastructure

Interests

  • Systems Thinking
  • AI Ethics & Bias
  • Photography

04

Writing

I write about helping teams ship ML faster and more safely, and about continuously questioning assumptions to improve model quality and operational trust.

05

Get in Touch

I'm always interested in discussing MLOps challenges, infrastructure strategy, or how to build ML systems that teams can actually trust. Feel free to reach out.

imamogluubilal@gmail.com

Consulting: I'm selectively available for ML infrastructure reviews, platform strategy consulting, and technical advisory engagements. If you're building production ML systems and need an operational perspective, let's talk.