Data Engineer

  • Location: Ontario
  • Type: Permanent
  • Job #2571

Role: Data Engineer
Type: Full time (preferred)
Years of experience: 3-5+
Location: Hybrid  

About This Role

We are looking for someone to take on a broad range of tasks associated with developing data/ETL pipelines that address business challenges. In this position you will expand your knowledge, strengthen your expertise and get to know the inner workings of our business alongside a team of seasoned, diversely skilled technology professionals.

Here’s some of what you may be asked to do:

  • Identify data sources, model data (holistic, conceptual, logical, physical) and design pipelines.
  • Implement pipelines, including ingesting, processing, and storing data.
  • Share pipeline models and designs to inform project team members and improve implementation.
  • Collaborate across teams to understand data sources and sets.
  • Dive into documentation repositories to research internal data sets.
  • Patiently share knowledge and expertise with team members.
  • Communicate technical constraints and challenges to audiences of varying technical expertise.
  • Transform business requirements and research into reliable, high-performing delivery solutions.
  • Aim for defect-free programming, create and maintain quality code, provide support during testing cycles and post-production deployment, and engage in peer code reviews.
  • Contribute to project plans, estimations and status updates.
  • Identify issues, then develop and maintain processes that address and resolve them (communicating with and alerting stakeholders as needed).
  • Configure and develop custom components with technology partners (analysts, developers, designers etc.) to meet requirements and goals.
  • Ensure applications are free of common coding vulnerabilities (and follow standard security practices).
  • Proactively put forward ideas that speak to project objectives (e.g. development, testing solutions, and tools).
  • Take part in scope assessment, risk and cost analysis.
  • Stay on top of state-of-health monitoring and monthly SLA targets.
  • Apply and share technical expertise throughout the incident management life cycle (e.g., analyze reports and outages, perform impact assessments, facilitate stakeholder communication).

What can you bring to the team?

  • Undergraduate Degree or Technical Certificate.
  • 5+ years of experience with pySpark, pandas, or a comparable data manipulation API.
  • 2-3 years of experience with Azure Databricks.
  • Familiarity with Datadog (observability), Azure Data Lake Storage, Azure Data Factory, Salt scripts, and the Delta Lake protocol.
  • Curiosity, commitment and empathy to collaborate across teams, learn about data sources and build a shared understanding of how data drives fraud analytics.
  • Readiness and motivation (as senior or lead developer and valued subject matter expert) to address and resolve highly complex and multifaceted development-related issues, often independently.
  • Strength in coaching and advising clients, partners and project teams.
