Location: Wilmington, Delaware (DE)
Contract Type: C2C
Posted: 21 hours ago
Closed Date: 04/27/2026
Skills: Data pipelines on Databricks, PySpark
Visa Type: Any Visa

Job Title: Azure Databricks Engineer (PySpark and Data Lake)

Location: Wilmington, DE – 5 days onsite

Long-term project; local candidates will get first preference.

DE (locals or candidates from nearby states; face-to-face interview) – 14+ years of experience

Azure is the preferred cloud.

Overview

We are seeking a Data Engineer to lead the modernization of legacy ETL systems by migrating Ab Initio workflows to scalable, modular PySpark pipelines on Databricks. The role involves transforming complex data ecosystems into cloud-native architectures while ensuring data integrity, performance, and reliability.

Key Responsibilities

ETL Modernization & Development

Analyze and migrate legacy ETL workflows from Ab Initio to PySpark-based pipelines

Design and develop scalable data pipelines on Databricks

Refactor monolithic processes into modular, reusable components

Leverage existing enterprise datasets to avoid redundancy

Data Integration & Processing

Build and maintain ETL/ELT pipelines integrating data from Snowflake and other sources

Process and publish enriched datasets for downstream applications

Support batch and near real-time data processing

Data Lineage & Optimization

Create end-to-end data lineage and data flow diagrams

Identify redundancies and drive process consolidation and optimization

Ensure adherence to data governance and quality standards

Testing & Validation

Develop unit, integration, and reconciliation frameworks

Perform dual-run comparisons with legacy systems

Validate outputs in UAT and pre-production environments
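To illustrate the dual-run comparison mentioned above, the core idea is to run the legacy and migrated pipelines side by side and reconcile their outputs on a business key. The sketch below is illustrative only: the function and column names are hypothetical, and a production implementation on Databricks would typically use PySpark DataFrame joins rather than in-memory dictionaries.

```python
# Minimal dual-run reconciliation sketch: compare legacy vs. migrated
# pipeline outputs keyed by a business key. Data and names are
# hypothetical; a real implementation would use PySpark DataFrames.

def reconcile(legacy_rows, new_rows, key="id"):
    """Return keys missing from either side and keys whose rows differ."""
    legacy = {r[key]: r for r in legacy_rows}
    new = {r[key]: r for r in new_rows}
    missing_in_new = sorted(legacy.keys() - new.keys())
    missing_in_legacy = sorted(new.keys() - legacy.keys())
    mismatched = sorted(
        k for k in legacy.keys() & new.keys() if legacy[k] != new[k]
    )
    return {
        "missing_in_new": missing_in_new,
        "missing_in_legacy": missing_in_legacy,
        "mismatched": mismatched,
    }

legacy_out = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": 250.0},
    {"id": 3, "amount": 75.0},
]
new_out = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": 251.0},  # value drift the dual run should catch
    {"id": 4, "amount": 10.0},   # row produced only by the new pipeline
]

report = reconcile(legacy_out, new_out)
print(report)
# → {'missing_in_new': [3], 'missing_in_legacy': [4], 'mismatched': [2]}
```

The same shape of check scales to Databricks by joining the two outputs on the key column and filtering for nulls (missing rows) or unequal values (mismatches).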