Machine Learning Engineer

MACHINE LEARNING ENGINEER (MLOPS / DATA ENGINEERING)


Overview

Darwill is a nationally recognized print and marketing communications firm based in the west suburbs of Chicago. As a premier provider of complex, data-driven marketing solutions, we help CMOs and marketing leaders drive measurable performance through advanced analytics, automation, and AI-powered insights.


We are seeking a Machine Learning Engineer (MLOps) to support the productionization of traditional machine learning models (e.g., propensity and segmentation models) while also building and maintaining the core data pipelines on Databricks that power our analytics and modeling platforms.


This role is intentionally scoped for a mid-level engineer: someone with enough experience to work independently and make sound engineering decisions, but who is still hands-on, execution-focused, and eager to grow. This is not an entry-level position, nor is it a principal or architect-level role.

  

Location

Chicago, IL area (Oak Brook / West Suburbs)
Hybrid work model with 1–2 days onsite per week required

  

Reports To

VP of Data Engineering & Data Science

  

Responsibilities / Essential Functions

Data Engineering & Platform Foundations

  • Design, build, and maintain ETL pipelines in Databricks using Spark and Delta Lake
  • Independently implement data transformations, joins, and aggregations across large, multi-source datasets
  • Build and maintain data validation and quality checks to ensure reliability of downstream analytics and ML workflows
  • Optimize Databricks jobs for performance, scalability, and cost efficiency
  • Write and maintain clear technical documentation for data pipelines and tables

ML Engineering & MLOps

  • Partner closely with Data Scientists to support traditional ML model development, including feature engineering, training, validation, and deployment
  • Productionize propensity, ranking, and segmentation models used in large-scale marketing campaigns
  • Build and maintain repeatable ML pipelines for training, batch scoring, and inference
  • Implement model versioning, experiment tracking, and reproducibility standards
  • Support model performance monitoring, drift detection, and retraining cycles

Deployment, Monitoring & Operations

  • Deploy data pipelines and ML workflows into production environments serving millions of records
  • Implement monitoring and alerting for data and ML pipelines
  • Support A/B testing and model performance evaluation in partnership with Data Science
  • Troubleshoot production issues independently and collaborate effectively when escalation is needed

GenAI (Secondary / Directional)

  • Contribute to GenAI initiatives as capacity allows
  • Stay informed on emerging AI technologies and tooling

(GenAI is not the primary focus of this role today.)

  

Required Qualifications

Experience

  • 3–6 years of professional experience in machine learning engineering, data engineering, or a closely related role
  • Experience working in production environments with minimal day-to-day supervision
  • Demonstrated ability to collaborate effectively with Data Scientists and translate models into production systems

Technical Skills (Must-Have)

Data Engineering & Platform

  • Apache Spark (PySpark, Spark SQL)
  • Databricks (ETL pipelines, workflows, Delta Lake)
  • Strong SQL skills (complex queries, joins, window functions, optimization)
  • Experience building and maintaining scalable data pipelines

Programming & Machine Learning

  • Python (pandas, NumPy, scikit-learn; experience with XGBoost or LightGBM preferred)
  • Feature engineering and data preparation for ML models
  • Working knowledge of supervised learning models (classification, regression, ranking)

MLOps & Production

  • Experience deploying ML models into production
  • Model versioning and experiment tracking (e.g., MLflow or similar)
  • Monitoring data quality and model performance in production
  • Supporting retraining and validation workflows

Cloud & Tooling

  • Experience with Databricks and a major cloud platform (e.g., AWS)
  • Familiarity with workflow orchestration tools (Databricks Workflows or similar)

  

Preferred Qualifications (Nice-to-Have)

  • Experience with propensity modeling, customer segmentation, or marketing analytics
  • Exposure to CI/CD concepts for data and ML pipelines
  • Experience with Docker or containerized deployments
  • Exposure to GenAI, LLMs, or RAG-based systems
  • Master’s degree in Computer Science, Statistics, or a related field