
Data Engineer (RARR Job 5893)

For A Next-Generation Global Technology Solutions Company
5–6 Years
Full Time
Immediate
Up to 7.5 LPA
1 Position
Bangalore / Bengaluru, Coimbatore, Hyderabad, Pune
Updated today

Job Description

Data Engineering & Architecture

  • Design and build scalable data products on the Databricks platform

  • Develop and optimize data pipelines using Python and PySpark

  • Implement best practices for data modeling, ETL/ELT processes, and data quality

  • Ensure data architecture supports current and future analytical needs
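As a rough illustration of the data-quality work described above, here is a minimal, framework-agnostic sketch in plain Python (in practice this would run on Databricks with PySpark; the column names and rules are hypothetical, not taken from this posting):

```python
# Hypothetical example: simple row-level data-quality checks of the kind a
# pipeline might apply before loading data downstream.

def validate_rows(rows, required_fields, range_rules):
    """Split rows into (valid, rejected) based on simple quality rules.

    required_fields: fields that must be present and non-None.
    range_rules: {field: (low, high)} inclusive bounds for numeric fields.
    """
    valid, rejected = [], []
    for row in rows:
        # A row passes only if every required field is populated...
        ok = all(row.get(f) is not None for f in required_fields)
        if ok:
            # ...and every bounded field falls inside its allowed range.
            for field, (low, high) in range_rules.items():
                value = row.get(field)
                if value is not None and not (low <= value <= high):
                    ok = False
                    break
        (valid if ok else rejected).append(row)
    return valid, rejected


# Usage: reject rows with a missing id or an out-of-range amount.
rows = [
    {"id": 1, "amount": 50},
    {"id": None, "amount": 20},
    {"id": 3, "amount": 999},
]
valid, rejected = validate_rows(rows, ["id"], {"amount": (0, 100)})
```

In a Databricks pipeline the same rules would typically be expressed as PySpark filter expressions or Delta Live Tables expectations rather than a Python loop.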

Technical Delivery

  • Write clean, efficient, and maintainable Python code

  • Optimize pipeline performance and troubleshoot technical issues

  • Document technical solutions and architecture decisions

Business Collaboration

  • Translate business requirements into technical solutions

  • Communicate effectively with non-technical stakeholders

  • Participate in requirement gathering and solution design sessions

  • Provide technical guidance to business users


Required Skills & Experience

Must-Have

  • Strong Python programming skills (3+ years)

  • Databricks platform experience (hands-on implementation)

  • Data Engineering expertise: pipeline development, ETL/ELT processes

  • Data Architecture knowledge: data modeling, solution design

  • Proven experience building data products in production environments

  • Experience with cloud platforms (Azure/AWS/GCP)

Desirable

  • Data analysis skills: SQL proficiency, analytical thinking

  • Experience with data visualization tools (Power BI, Tableau, etc.)

  • Delta Lake and Lakehouse architecture knowledge

  • CI/CD implementation for data pipelines

  • Agile/Scrum methodology experience


Soft Skills

  • Excellent communication skills with both technical and business audiences

  • Ability to work independently

  • Problem-solving mindset and proactive approach

  • Stakeholder management capabilities


Technical Stack

  • Primary: Databricks, Python, PySpark, SQL

  • Cloud: Azure/AWS

  • Version Control: Git

  • Additional: Data Modeling


Deliverables

  • Functional data products ready for production use

  • Well-documented data pipelines and architectures

  • Technical documentation and knowledge transfer materials

  • Regular progress updates and stakeholder presentations