Our client, a global leader in the financial services industry (FSI), is looking for a highly skilled, results-driven individual to join their team as a Senior Data Engineer. This senior technical role is responsible for designing, building, deploying, and supporting data integration solutions for business intelligence reporting, analytics, and application integration. The ideal candidate will work with Python, AWS Glue, and other supporting tools and services. The team is involved in exciting projects that leverage cutting-edge technologies and the client's cloud-based data platform to drive advanced analytics and data science.
Important: As a requirement for the role, the successful candidate must be eligible to obtain a Government of Canada Reliability Status security clearance.
Key Responsibilities:
- Design, code, and guide the development of efficient data applications and reusable frameworks.
- Develop data pipelines in a cloud environment using Python and AWS Glue (PySpark); a brief illustrative sketch follows this list.
- Coordinate and participate in the full development cycle, from design and development to release planning and system implementation.
- Mentor and guide junior data engineers across multiple locations to ensure code quality, efficiency, and maintainability.
- Translate requirements into detailed functional and technical designs using approved technologies.
- Provide high-level solution options and project proposals, along with detailed work estimates.
- Deliver solutions using Systems Development Life Cycle (SDLC) methodology, for both waterfall and agile projects.
- Provide consultation for evaluating data and software systems.
- Build and manage strong working relationships with other departments, teams, or personnel.
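For context, the day-to-day pipeline work centres on AWS Glue jobs written in Python/PySpark. The following is a minimal, illustrative sketch of such a job; the catalog database, table, and S3 path names are placeholders, not details of the client's environment.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialise contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read a source table from the Glue Data Catalog (placeholder names).
source = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db",            # hypothetical catalog database
    table_name="raw_transactions",  # hypothetical catalog table
)

# Convert to a Spark DataFrame for transformation logic.
df = source.toDF()
df = df.filter(df["amount"] > 0).withColumnRenamed("amount", "transaction_amount")

# Write the curated result back to S3 as Parquet (placeholder path).
output = DynamicFrame.fromDF(df, glueContext, "output")
glueContext.write_dynamic_frame.from_options(
    frame=output,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/transactions/"},
    format="parquet",
)

job.commit()
```

In practice such jobs are parameterised and orchestrated alongside other services, but the overall shape above (read from the Glue Data Catalog, transform with Spark, write back to S3) is the common pattern for this kind of work.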
Required Skills and Experience:
- 5 to 7+ years of experience in developing solutions for data warehouse loads and system integrations using ETL tools.
- At least 2 years of experience developing data pipelines using AWS Glue.
- Minimum of 2 years of experience with Python script development using PySpark and object-oriented ETL methods.
- Strong core competency in SQL is essential.
- At least 3 years of experience working with Big Data, including knowledge of Hive.
- Experience creating complex data frames and structures in Hadoop for data integration and complex calculations.
- Experience with HDFS, Tez, and Spark is beneficial.
- Familiarity with AWS Step Functions and AWS Lambda is an asset.
- Experience in data modeling and designing data structures to support high-performance SQL queries.
- Advanced SQL writing skills for handling large volumes of data efficiently.
- Ability to analyze and reverse engineer existing data integration code.
- Experience with complex multi-level data transformations to meet business needs (see the PySpark sketch after this list).
- Experience with production implementation and change management processes.
- Strong experience in project management and SDLC, particularly in an Agile environment.
- Excellent analytical, conceptual, and problem-solving skills.
- Proven ability to collaborate and lead teams, with strong coaching and mentoring abilities.
- Ability to work in a global, multi-site environment with onshore/offshore teams.
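As a rough illustration of the multi-level transformation work referenced above, the sketch below combines an aggregation with a window function in PySpark; the data, column names, and grouping logic are made up for the example, and the same pattern is often expressed as advanced SQL against large tables.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("transform-example").getOrCreate()

# Hypothetical input: one row per customer transaction.
df = spark.createDataFrame(
    [("c1", "2024-01-05", 120.0),
     ("c1", "2024-02-10", 80.0),
     ("c2", "2024-01-20", 200.0)],
    ["customer_id", "txn_date", "amount"],
)

# Level 1: aggregate transactions into monthly totals per customer.
monthly = (
    df.withColumn("month", F.substring("txn_date", 1, 7))
      .groupBy("customer_id", "month")
      .agg(F.sum("amount").alias("monthly_total"))
)

# Level 2: add a running total per customer with a window function.
w = Window.partitionBy("customer_id").orderBy("month")
result = monthly.withColumn("running_total", F.sum("monthly_total").over(w))

result.orderBy("customer_id", "month").show()
```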