The Amur Data team maximizes the value of data by unlocking its power in consistent and simplified ways to create insights used to solve some of our most complex business problems. The team provides platform modernization, data delivery, and operations support to Analytics teams aligned to Sales, Originations and Operations initiatives.
We are seeking a highly skilled Data Engineer to join our innovative, agile, and fast-paced team. As a Data Engineer, you will utilize emerging technologies and play an integral role in helping business partners across the enterprise make data-driven decisions to enable Amur’s continued success. The Data Engineer will collaborate with business leaders, support teams, and other engineers/developers to modernize workflows, automate routine operations, and solve complex business requirements.
The ideal candidate will have a minimum of 5 years’ experience delivering creative automation, methodology, and infrastructure solutions to on-prem and cloud-based data customers.
As a Data Engineer, your responsibilities will include performing a combination of duties in accordance with departmental guidelines:
- Applying in-depth knowledge of data engineering practices, including Extract, Transform, Load (ETL); data integration; data pipelines; data management; and data storage functionality, to design and build reusable data solutions.
- Expert knowledge of relational database concepts, ETL/ELT, star/snowflake schema, and data modeling.
- Expert knowledge of data integration design and development, ensuring accuracy and ease of consumption. Strong troubleshooting and problem-solving skills.
- Proficient in advanced SQL and Python in a business environment with large-scale, complex datasets.
- Utilizing data and application architecture to increase efficiency and effectiveness of solutions aimed at solving complex business problems.
- Excellent communication and interpersonal skills and the ability to work effectively with peers and team members.
- Strong experience building data analytics solutions (data warehouse, data lake, or data lakehouse), either on-premises or with cloud providers such as Azure and AWS, or using Databricks.
- Collaborating with Data Product Managers, Data Architects, and other developers to design, implement, and deliver successful data solutions.
- Preferred experience with the financial services industry, its products, and services.
- Strong experience implementing big data processing technologies; Apache Spark preferred.
- Working knowledge of Business Intelligence tools; Tableau and SSRS preferred.
Preferred/Required Qualifications:
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Demonstrated proficiency in designing and developing data marts in a snowflake schema.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources (SQL Server, NoSQL, Kafka) using AWS or Azure big data technologies.
- Use troubleshooting skills to identify and correct the root cause of workflow failures based on error log outputs and environmental conditions.
- Use SQL to examine, filter, and aggregate data in Microsoft SQL Server.
- Experience working with data transformation processes.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Experience working with Microsoft BI and Microsoft SQL Server.
- Perform proofs of concept (POCs) on new technologies and architecture patterns.
- Must have experience with at least one columnar MPP cloud data warehouse (Snowflake, Azure Synapse, or Redshift).
- Design complex physical data models, projects, and cloud-based data lake constructs, including SQL/NoSQL database systems. Lead the creation of integrated data views based on business or analytics requirements.
- Design, implement, and automate data pipelines sourcing data from internal and external systems, transforming the data to meet the needs of various systems and business requirements.
- Experience with ETL tools such as dbt is nice to have.
- Experience with version control and DevOps platforms such as Azure DevOps, GitHub, or GitLab.
- Experience with CI/CD Pipelines and SDLC best practices.
- Experience using Agile methods and project management tools like Jira preferred.
Education
- Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline.
- Master’s degree (not required, but nice to have).
- Applicable certifications preferred (Azure, AWS, Data Engineering).