For A Reputed Large Multinational Technology Company
4 - 9 Years
Full Time
Up to 30 Days
Up to 25 LPA
1 Position(s)
Chennai, Pune, Greater Noida
Posted by: Nilasu Consulting Services Pvt Ltd
B2 Band – 4 to 6 years
B3 Band – 7 to 10 years
Notice Period – Immediate to 30 days
Location – Chennai / Pune / GNDC
Responsibilities (Developer):
• Develop and maintain Hadoop applications: Write efficient and scalable code for data ingestion, processing, and analysis using Hadoop ecosystem tools (HDFS, Hive, HBase) and PySpark.
• Data pipeline development: Design and implement end-to-end data pipelines for batch and real-time processing (a minimal sketch follows this list).
• Data transformation: Utilize PySpark to transform and aggregate data from various sources.
• Performance optimization: Continuously monitor and optimize Hadoop jobs to ensure efficient resource utilization and timely processing.
• Collaboration: Work closely with business analysts and data analysts to translate business requirements into technical solutions.
• Testing and deployment: Provide testing support during the SIT/UAT phases and assist in deploying solutions to production environments.
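As a rough illustration only, a minimal PySpark batch job of the kind described above might look like the sketch below; the application, table, and column names are hypothetical and are not taken from the role description.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hive support lets the job read and write managed tables on HDFS.
    spark = (SparkSession.builder
             .appName("daily_txn_batch")
             .enableHiveSupport()
             .getOrCreate())

    # Ingest: raw transactions from a Hive staging table (names are illustrative).
    raw = spark.table("staging.transactions")

    # Transform: keep valid rows and aggregate per account per day.
    daily = (raw
             .filter(F.col("amount") > 0)
             .withColumn("txn_date", F.to_date("txn_ts"))
             .groupBy("account_id", "txn_date")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("txn_count")))

    # Load: write the curated output back to Hive, partitioned by date.
    (daily.write
          .mode("overwrite")
          .partitionBy("txn_date")
          .saveAsTable("curated.daily_account_summary"))

    spark.stop()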
Required Skills and Experience (Developer):
• Hadoop ecosystem: Proficiency in Hadoop core components and related tools (Hive, HBase, Sqoop).
• PySpark expertise: Strong PySpark skills with experience in developing data processing pipelines.
• Python programming: Excellent Python programming skills with a focus on data manipulation and analysis libraries (Pandas, NumPy).
• SQL proficiency: Ability to write efficient SQL queries for data extraction and analysis within Hive or other SQL-like interfaces (see the sketch after this list).
• Problem-solving: Strong analytical and problem-solving skills to tackle data-related challenges.
• Communication: Effective communication skills to collaborate with diverse stakeholders.
• Other: Retail Banking domain experience and IBM DataStage knowledge would be an advantage.
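To make the SQL and Python bullets concrete, the sketch below runs a HiveQL query through Spark's SQL interface and pulls a small result set into Pandas for further analysis; the schema and table names are invented for illustration.

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive_sql_analysis")
             .enableHiveSupport()
             .getOrCreate())

    # HiveQL run through Spark's SQL interface (table names are made up).
    top_accounts = spark.sql("""
        SELECT account_id,
               SUM(total_amount) AS total_amount
        FROM   curated.daily_account_summary
        WHERE  txn_date >= date_sub(current_date(), 30)
        GROUP  BY account_id
        ORDER  BY total_amount DESC
        LIMIT  100
    """)

    # Only the small aggregated result is pulled to the driver for Pandas analysis.
    pdf = top_accounts.toPandas()
    print(pdf.describe())

    spark.stop()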
Responsibilities (Lead):
• Technical leadership: Provide technical guidance and mentorship to a team of Hadoop developers.
• Architecture design: Design and implement scalable and reliable big data architectures using Hadoop and PySpark.
• Code review and optimization: Review code for quality and performance, and lead efforts to optimize data pipelines.
• Project management: Plan and manage the development and deployment of data processing projects.
• Stakeholder collaboration: Collaborate with business stakeholders to understand requirements and translate them into technical solutions.
• Innovation: Proactively recommend improvements to the system.
Required Skills and Experience (Lead):
• Proven leadership: Experience leading or mentoring technical teams.
• Hadoop/PySpark expertise: Advanced proficiency in Hadoop and PySpark, including experience with complex data processing pipelines.
• Architecture design: Experience designing and implementing scalable big data architectures.
• Performance optimization: Expertise in optimizing Hadoop/PySpark jobs for efficiency and performance (see the sketch after this list).
• Communication: Excellent communication and interpersonal skills to effectively collaborate with stakeholders across departments.
• Other: Retail Banking domain experience and IBM DataStage knowledge would be an advantage.
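One way to read the performance-optimization bullet above: tuning usually comes down to controlling shuffles, join strategies, and caching. The sketch below is an assumption-laden example (the table names, join key, and partition count are all invented), not a prescription from the role description.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("join_tuning_example")
             .config("spark.sql.shuffle.partitions", "400")  # sized to data volume, not a universal default
             .enableHiveSupport()
             .getOrCreate())

    txns = spark.table("warehouse.transactions")        # large fact table (illustrative)
    branches = spark.table("reference.branch_lookup")   # small dimension table (illustrative)

    # Broadcasting the small lookup avoids shuffling the large table for the join.
    enriched = txns.join(F.broadcast(branches), on="branch_id", how="left")

    # Cache only when the result feeds several downstream actions.
    enriched.cache()
    enriched.count()    # materialises the cache

    enriched.explain()  # confirm a BroadcastHashJoin in the physical plan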
For An Indian Multinational Information Technology Company
Bangalore / Bengaluru, Hyderabad
5 - 8 Years ( Full Time )
ADB, Advanced SQL, Azure Databricks, Databricks, Data Modelling, PySpark, Python, SQL
Not disclosed
For A French MNC IT Company
Bangalore / Bengaluru
6 - 9 Years ( Full Time )
JavaScript, PySpark, Python, SQL
Not disclosed
For International Trade And Development Company
LTIMindtree Locations
5 - 8 Years ( Full Time )
ADB, Azure Databricks (ADB), PySpark, SQL
Not disclosed