Jr. Data Engineer - Databricks
Fully Remote | Sacramento, CA

About 11:59 

11:59 is a business and technology consulting firm focused on delivering mission-critical work for private and public sector organizations.

Led by former Big 4 consulting executives, we have a deep bench of technology experts whose sole purpose is to help take clients’ business and digital transformation objectives from strategy through execution. For nearly 20 years, we have guided forward-thinking clients to discover their full potential and have delivered hundreds of projects and billions of dollars’ worth of project value to our clients.

Our culture and our values—including curiosity, collaboration, integrity, commitment, and respect—are core to who we are at 11:59. We take these ideals very seriously, and they guide us in our purpose to help our customers reach their full potential and focus on their mission-driven work. Every touchpoint with a prospect or client is further guided by our client experience model, and we care deeply about curating an elevated, outstanding experience.

Job Description: 

As a Databricks data engineer, your main role is to design, develop, and manage the data infrastructure on the Databricks platform within an AWS / Azure cloud environment. This involves tasks like configuring the data lake (ADLS, S3), creating and optimizing data pipelines, and closely monitoring them to ensure data quality and scalability. 

Your responsibilities also extend to integrating data from different sources, conducting data transformations, configuring secure data sharing, and ensuring data cleanliness. Success requires effective collaboration with various internal and client teams, including product owners and developers. Understanding their data requirements and providing appropriate solutions will be an integral part of your work. By doing so, you'll contribute significantly to our client’s digital transformation initiatives and facilitate data-driven decision-making while advancing their AI/ML journey.

Job Responsibilities 

  • Create technical, functional, and operational documentation for data pipelines and applications. 
  • Work effectively in an Agile Scrum environment (JIRA / Azure DevOps) 
  • Use business requirements to drive the design of data solutions/applications and technical architecture. 
  • Work with other developers, designers, and architects (local and remote) to ensure data applications meet requirements for performance, data security, and analytics.
  • Anticipate, identify, track, and resolve issues and risks affecting delivery. 
  • Configure, build, and test applications and technical architecture. 
  • Fix any defects and performance problems discovered in testing. 
  • Coordinate and participate in structured peer reviews / walkthroughs / code reviews. 
  • Provide application/technical support. 
  • Maintain and/or update technical and/or industry knowledge and skills through continuous learning activities. 

Required Qualifications: 

  • B.S. in Computer Science/Engineering or relevant work experience 
  • 3+ years of experience in the IT industry 
  • 1+ years of hands-on experience in data engineering/ETL using Databricks on AWS / Azure cloud infrastructure and services.
  • Understanding of data warehousing concepts (dimensional/star schema, SCD2, Data Vault, denormalized) and experience implementing highly performant data ingestion pipelines from multiple sources
  • Proficiency with Python / PySpark and SQL
  • Experience working within a data engineering framework, including package/dependency management tools (e.g., Poetry) and testing/linting tools (e.g., Pytest, Pytest-Cov, Pylint)
  • Experience with CI/CD on Databricks using tools such as Jenkins, GitHub Actions, and Databricks CLI 
  • Experience integrating end-to-end Databricks pipelines that move data from source systems to target data repositories, ensuring data quality and consistency are maintained throughout.
  • Strong understanding of Data Management principles (quality, governance, security, privacy, life cycle management, cataloguing) 
  • Evaluating the performance and applicability of multiple tools against customer requirements 
  • Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. 
  • Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT). 
  • Hands-on experience developing batch and streaming data pipelines.
  • Able to work independently
  • Excellent oral and written communication skills 
  • Nice to have: experience in Power BI/Tableau/QuickSight 
  • Nice to have: experience with AWS Redshift, Snowflake, Azure Synapse 

How you’ll grow: 

At 11:59, our professional development plan is dedicated to supporting individuals at all stages of their careers in recognizing and utilizing their strengths to achieve optimal performance every day. Whether you are an entry-level employee or a senior leader, we strongly believe in the power of continuous learning and provide a range of opportunities to enhance skills and gain practical experience in the dynamic and rapidly evolving global technical landscape. 

Our approach includes both on-the-job learning experiences and formal development programs, ensuring that our professionals have ample opportunities for growth throughout their entire career journey. We are committed to fostering a culture of continuous improvement, where every individual can thrive, reach their fullest potential, and deliver the art of the possible. 

Why work with us? 

  • Competitive pay 
  • Health, dental, vision, and life insurance 
  • Unlimited Paid Time Off 
  • 401(k) matching 
  • Laptop 
  • Remote 

Applicants must be able to work in the United States without the need for current or future visa sponsorship. No Corp to Corp. 
