Senior Data Engineer
Fully Remote | within USA
Description

THE COMPANY 


Archer Education partners with colleges and universities across the US to help them reach further, recruit smarter, and grow faster through targeted digital enrollment marketing solutions. If you’re looking for a meaningful career where you’ll have the freedom, flexibility, and support to thrive – all while having fun and working with some pretty awesome people – then Archer Education might be right for you. Archer offers a competitive salary, excellent benefits, referral bonuses, tuition reimbursement, and a fun, casual work environment.


Since 2007, Archer has developed a reputation for being innovative, dependable, and always willing to go above and beyond for our partners. As we grow, it is more important than ever that we stay focused on these qualities, and doing so requires amazing people who are also innovative, dependable, and willing to go above and beyond. We are a culture that values experience and knowledge, and we need someone who can make an immediate impact and help us continue delivering a high-quality experience that exceeds expectations.


THE ROLE 


The Senior Data Engineer will play a key role in modernizing our data infrastructure. Reporting directly to the CTO, with a dotted line to all executive leadership, they will be instrumental in developing and optimizing a centralized data lake that supports data-driven decision-making across the company. This role offers the opportunity to transition our existing data architecture, which currently consists of a combination of legacy data warehouses and manual processes, into a streamlined and efficient data ecosystem.


THE RESPONSIBILITIES

  • Architect and oversee a unified data lake infrastructure to consolidate and standardize data from diverse origins.
  • Build and refine ETL/ELT pipelines for data ingestion from various sources, including web application databases, client files, marketing platforms, and legacy metadata files.
  • Engineer and deploy AI-based solutions to automate data analytics, improve predictive accuracy, and streamline decision-making.
  • Modernize existing reporting systems and ETL processes to improve consistency, operational efficiency, and reliability.
  • Establish self-service analytics capabilities that enable non-technical users to retrieve and analyze data.
  • Collaborate across departments with technical staff, analysts, and other business partners.
  • Institute data validation protocols and monitoring to preserve data integrity.
  • Produce documentation for data pipelines, models, and operational processes.

THE REQUIREMENTS & SKILLS

  • Extensive experience in designing, implementing, and maintaining ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) pipelines to move and transform data efficiently and reliably between various sources and destinations.
  • Strong grasp of data modeling principles and data warehousing concepts, including dimensional modeling, star schemas, and snowflake schemas, to design and implement effective data storage and retrieval solutions.
  • Hands-on experience with at least one modern data pipeline/workflow orchestration tool (e.g., Apache Airflow, Dagster, Prefect) to automate and manage complex data workflows, ensuring data quality and consistency.
  • Demonstrated ability to translate business requirements into technical specifications, bridging the gap between business needs and technical implementation.
  • Excellent communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical stakeholders in a clear and concise manner, fostering collaboration and understanding across teams.
  • Familiarity with cloud computing platforms, specifically Amazon Web Services (AWS), and its suite of data services (e.g., S3, Redshift, EMR) to leverage cloud-based solutions for data storage, processing, and analytics.
  • Proficiency in SQL (Structured Query Language) for querying and manipulating data in relational databases, a core skill for data engineers.
  • Bachelor’s degree in a relevant field
  • 4+ years of relevant experience 

PREFERRED QUALIFICATIONS 

  • Experience with AWS data services, including Redshift, RDS, Lambda, Glue, and Athena
  • Familiarity with Google BigQuery and Looker Studio
  • Knowledge of data lake architectures and best practices

WORKING CONDITIONS 


This is a fully remote, individual contributor role with high visibility across the organization and the opportunity to significantly impact our company's data capabilities.


THE BENEFITS


At Archer, we believe empowered employees have the most fun and do their best work, so we strive to offer a competitive benefits package to allow just that. Our comprehensive benefits package includes medical, dental and vision plans, as well as paid time off and sick days. Here are just a few more of the benefits you’ll enjoy as an Archer team member.

  • Invest in your future: 401(k) plans with a full match on contributions up to 4%.
  • Prioritize your family: Up to 12 weeks of parental leave with 6 weeks paid, following the birth, adoption, or foster care placement of a child.
  • Never stop learning: Receive up to $5,250 per year in tuition reimbursement for continuing education.
  • Balance work and life: We offer a flexible working environment and schedule that empowers you to do your best work. Our teams work across all time zones in the US. 


Visit archeredu.com to learn more.