Lead Data Engineer

  • Location: Ontario
  • Type: Contract
  • Job #2297

Lead Data Engineer
Jarvis Consulting Group

Location: Hybrid, downtown Toronto (2x/week)
Start Date: ASAP

We have an exciting opportunity for the right person. Do you have Big Data experience with Spark ETL, Scala, Databricks, and SQL development, and do you want to be part of a new team? We are looking for you. You will work with one of the best new consulting companies and play an integral role in its new data consulting practice.

Work you will do 

  • Understand Big Data and Cloud environments from a performance and capacity management standpoint, using the tools needed to onboard, monitor and move data into these environments
  • Collaborate as the Big Data and Cloud technical lead with multiple technology teams to ensure that the associated applications, integrations, infrastructure, and security architecture are designed to meet evolving business requirements and standards for reliability, scalability, performance, and availability
  • Lead internal and external products and projects, helping to build world-class big data software that solves real problems
  • Be responsible for the successful implementation of technical solutions for projects, supporting highly complex business applications with complex integration needs across multiple technology disciplines
  • Formulate and define project scope and objectives based on a thorough understanding of each project's technical requirements
  • Use sound Agile development practices (code reviews, unit testing, etc.) to develop and deliver quality code and data products
  • Provide day-to-day support and technical expertise to both technical and non-technical teams
  • Help to develop junior talent and transform them into experts
  • Work with other engineers to brainstorm solutions to problems and support bank objectives

Who we are looking for

  • Ability to build scalable and reliable ETL pipelines and processes to ingest data from a large number and variety of data sources
  • 4+ years Spark Development experience using Scala
  • Knowledge of the Hadoop and cloud Big Data ecosystem (Spark, Hive, HDFS, Azure/AWS/GCP, etc.)
  • Experience with streaming technologies such as Kafka and Spark Streaming
  • Experience with Agile development (JIRA / Confluence)
  • Experience with version control systems, such as Git
  • Knowledge of development tools such as IDEs, Maven, and SBT
  • Knowledge of orchestration tools such as Airflow, Autosys, Oozie, and cron
  • Experience with containerization (Docker) and orchestration (Kubernetes) is a plus
  • Experience with developing and deploying applications to the cloud environment

Who you are

  • Able to work well both individually and as part of a team
  • Proven ability to work creatively in a problem-solving environment
  • Strong communication skills
  • Able to work closely with technical and non-technical teams in a collaborative environment
  • University degree in a relevant STEM discipline (Computer Science, Electrical/Computer/Software Engineering, or Mathematics)
