Location: Pleasanton, California (CA)
Contract Type: C2C
Posted: 2 days ago
Skills: Azure (ADF, Synapse, Fabric), AWS (Glue, EMR, S3, Lambda), or GCP (Dataflow, BigQuery)
Title: Sr. Data Engineer
Location: Pleasanton, CA (Day 1 Onsite)
Job: Contract
Job Description:
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
- 6+ years of professional experience in Data Engineering or Big Data ecosystem roles.
- Strong hands-on experience with Databricks (PySpark, Spark SQL, Delta Lake, Workflows).
- Proficiency in Python and SQL for building scalable data transformations.
- Expertise in one or more cloud platforms: Azure (ADF, Synapse, Fabric), AWS (Glue, EMR, S3, Lambda), or GCP (Dataflow, BigQuery).
- Experience with data warehousing technologies such as Snowflake, Redshift, Synapse, or BigQuery.
- Familiarity with streaming frameworks like Kafka for real-time data ingestion.
- Strong understanding of data modeling, dimensional modeling, and data architecture principles.
- Experience working with Airflow, CI/CD pipelines, and containerized environments (Docker, Kubernetes).
- Excellent communication and leadership skills, with the ability to collaborate across distributed teams.
Preferred Skills:
- Certifications in Databricks, Snowflake SnowPro, or AWS/Azure Data Engineering.
- Experience implementing data governance and metadata management frameworks.
- Knowledge of machine learning pipelines and integration with downstream AI/ML teams.
- Exposure to ELK Stack, Power BI, or Tableau for data visualization.
- Prior experience building or managing real-time analytics pipelines in large enterprise environments.