Sr. Data Engineer -- Databricks
Fully Remote (Sacramento, CA)

We are 11:59, a professional services firm that helps forward-thinking public and commercial sector clients unlock the full potential of their transformation and modernization efforts. With a team led by former Big 4 Consulting Executives, we bring the Art of the Possible to life, delivering unbiased strategies that are independent from resource constraints, while generating true value and advantage based on what is possible today so that our clients are ready for tomorrow.

As a firm that’s built on the curiosity of our experts, we employ an integrated, agile talent model to deliver modern technology. Our clients hire us for our shared commitment to their mission, the collaborative approach of our team members, and our track record of delivering breakthrough solutions and results. 

At 11:59, we don’t just showcase the art of the possible, we deliver it.

Job Description:

As a Databricks data engineer, your main role is to design, develop, and manage the data infrastructure on the Databricks platform within an AWS / Azure cloud environment. This involves tasks like configuring the data lake (ADLS, S3), creating and optimizing data pipelines, and closely monitoring them to ensure data quality and scalability.

Your responsibilities also extend to integrating data from different sources, performing data transformations, configuring secure data sharing, and ensuring data cleanliness. Success requires effective collaboration with internal and client teams, including product owners and developers; understanding their data requirements and providing appropriate solutions will be an integral part of your work. In doing so, you'll contribute significantly to our clients' digital transformation initiatives, facilitate data-driven decision-making, and advance their AI/ML journey.

Job Responsibilities:

  • Create technical, functional, and operational documentation for data pipelines and applications
  • Work effectively in an Agile Scrum environment (JIRA / Azure DevOps)
  • Use business requirements to drive the design of data solutions/applications and technical architecture
  • Work with other developers, designers, and architects (local and remote) to ensure data applications meet requirements as well as performance, data security, and analytics goals
  • Anticipate, identify, track, and resolve issues and risks affecting delivery
  • Configure, build, and test applications and technical architecture
  • Fix any defects and performance problems discovered in testing
  • Coordinate and participate in structured peer reviews / walkthroughs / code reviews
  • Provide application/technical support
  • Maintain and/or update technical and/or industry knowledge and skills through continuous learning activities

Required Qualifications:

  • B.S. in Computer Science/Engineering or relevant field
  • 8+ years of experience in the IT industry
  • 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS / Azure cloud infrastructure and services
  • Expert understanding of data warehousing concepts (dimensional/star schema, SCD Type 2, Data Vault, denormalized models) and experience implementing highly performant data ingestion pipelines from multiple sources
  • Expert level skills with Python / PySpark and SQL
  • Experience working within a data engineering framework, including package/dependency management tools (e.g., Poetry) and testing/linting tools (e.g., Pytest, pytest-cov, Pylint)
  • Experience with CI/CD on Databricks using tools such as Jenkins, GitHub Actions, and Databricks CLI
  • Experience integrating end-to-end Databricks pipelines that move data from source systems to target data repositories while ensuring data quality and consistency are maintained
  • Strong understanding of Data Management principles (quality, governance, security, privacy, life cycle management, cataloguing)
  • Evaluating the performance and applicability of multiple tools against customer requirements
  • Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints
  • Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT)
  • Hands on experience developing batch and streaming data pipelines
  • Able to work independently
  • Excellent oral and written communication skills
  • Nice to have: experience in Power BI/Tableau/QuickSight
  • Nice to have: experience with AWS Redshift, Snowflake, Azure Synapse

How you’ll grow:

At 11:59, our professional development plan is dedicated to supporting individuals at all stages of their careers in recognizing and utilizing their strengths to achieve optimal performance every day. Whether you are an entry-level employee or a senior leader, we strongly believe in the power of continuous learning and provide a range of opportunities to enhance skills and gain practical experience in the dynamic and rapidly evolving global technical landscape.

Our approach includes both on-the-job learning experiences and formal development programs, ensuring that our professionals have ample opportunities for growth throughout their entire career journey. We are committed to fostering a culture of continuous improvement, where every individual can thrive, reach their fullest potential, and deliver the art of the possible.

Why work with us?

  • Competitive pay
  • Health, dental, vision, and life insurance
  • Unlimited Paid Time Off
  • 401(k) matching
  • Remote
Salary Description
$90,000 to $120,000 per year