Location: Atlanta, Georgia (GA)
Contract Type: C2C
Posted: 5 hours ago
Closed Date: 03/17/2026
Skills: DataStage / SQL / ETL
Visa Type: Any Visa

Job Title: Data Engineer (DataStage / SQL / ETL)

Location: Atlanta, Georgia – Day 1 onsite, 5 days/week in office – local candidates only – face-to-face (F2F) interview required

Experience: 12+ years required

Duration: 12 months

 

Job Overview:

We are seeking a skilled Data Engineer with strong expertise in SQL and ETL development; hands-on experience with Rhine (or a similar metadata-driven orchestration framework) is a plus. The ideal candidate will play a key role in building scalable data pipelines, managing data transformation workflows, and supporting analytics initiatives across the enterprise.

Mandatory Skills: DataStage (6+ years required, including on the current project), Microsoft SQL Server, data warehousing, ETL, SSIS, PowerShell, Python, Informatica, Talend. Rhine is not mandatory.

Key Responsibilities:

• Design, develop, and maintain ETL pipelines using best practices and enterprise data architecture standards.

• Write advanced SQL queries for data extraction, transformation, and analysis from structured and semi-structured data sources.

• Work with Rhine-based pipelines to enable dynamic, metadata-driven data workflows.

• Collaborate with data architects, analysts, and business stakeholders to understand data requirements and implement robust solutions.

• Ensure data quality, consistency, and integrity across systems.

• Participate in performance tuning, optimization, and documentation of data processes.

• Troubleshoot and resolve issues in data pipelines and workflows.

• Support deployment and monitoring of data jobs in production environments.

Required Qualifications:

• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.

• Strong hands-on experience with SQL (complex joins, window functions, CTEs, performance tuning).

• Proven experience in ETL development using tools like Informatica, Talend, DataStage, or custom Python/Scala frameworks.

• Familiarity with or experience in using Rhine for metadata-driven pipeline orchestration (not mandatory).

• Working knowledge of data warehousing concepts and dimensional modeling.

• Exposure to cloud platforms (AWS, Azure, or GCP) and tools such as Snowflake, Redshift, or BigQuery is a plus.

• Experience with version control (e.g., Git) and CI/CD pipelines for data jobs.