Company: Mastercard
Location: Pune, MH, India
Career Level: Mid-Senior Level
Industries: Banking, Insurance, Financial Services

Description

Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary

Data Engineer II

Job Overview

The Software Engineer II will contribute to the design, development, and maintenance of big data solutions within the Hadoop ecosystem. The role involves working with distributed data processing frameworks and supporting the creation of reliable, high-performance data pipelines.

The engineer will be responsible for hands-on development and troubleshooting using the Cloudera distribution, along with related ecosystem tools such as Apache Ozone, Apache Iceberg, Apache Airflow, and Apache NiFi. The position requires hands-on experience with Apache Spark to support large-scale, massively parallel data processing tasks.

The Software Engineer II will collaborate closely with senior team members to implement best practices, optimize data workflows, and ensure seamless integration of components across the data platform.

Role

  • Contribute to the design, development, and enhancement of data engineering solutions within the Hadoop ecosystem, ensuring robust, scalable, and efficient data processing.

  • Work hands-on with the Cloudera distribution, implementing and maintaining data pipelines and platform components using tools such as Apache Ozone, Apache Iceberg, Apache Airflow, and Apache NiFi.

  • Develop and optimize large-scale data processing jobs using Apache Spark, leveraging massively parallel processing frameworks for high-performance workloads.

  • Collaborate closely with senior engineers, architects, and cross-functional teams to implement best practices, improve workflow reliability, and support platform-level enhancements.

  • Participate in performance tuning, troubleshooting, and root cause analysis of distributed systems and data pipelines in production environments.

  • Contribute to the evaluation and adoption of emerging technologies in data storage, orchestration, and distributed processing to continuously improve system performance and efficiency.

  • Ensure adherence to operational standards, including reliability, quality, and security requirements, while meeting project timelines and SLAs.

  • Support documentation, automation, and incremental improvements to engineering processes and data platform components.

 Education

 Bachelor's degree in Information Technology, Computer Science, or Management Information Systems, or equivalent work experience.


 Knowledge / Experience

  • Experience in a related field, including around 2 years of experience delivering secure solutions in the financial services sector, is preferred.

  • Thorough knowledge and understanding of software engineering concepts and methodologies is required.

  • Demonstrates MC Core Competencies.

 About You

  • You are proactive, detail-oriented, and able to work independently while collaborating effectively within a team environment.

  • You have a strong ability to learn and adapt to new big data technologies, tools, and frameworks, continuously improving your technical depth.

  • You bring 3+ years of overall experience, including 2+ years of hands-on experience working on Hadoop ecosystem projects with the Cloudera distribution.

  • You have practical experience building and maintaining distributed data pipelines using Apache Spark for massively parallel processing.

  • You have hands-on knowledge of Apache Ozone and Apache Iceberg, with experience implementing distributed storage and modern table formats.

  • You have experience creating and managing orchestration workflows using Apache Airflow and Apache NiFi for complex data movement and integration.

  • You possess strong analytical and debugging skills, with the ability to troubleshoot distributed systems and resolve production issues effectively.

  • You communicate clearly, work well in collaborative, multi-location teams, and maintain strong organizational discipline.

  • You are adaptable, self-driven, and eager to contribute to the continuous improvement of data engineering practices and platform capabilities.

Corporate Security Responsibility


All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:

  • Abide by Mastercard's security policies and practices;

  • Ensure the confidentiality and integrity of the information being accessed;

  • Report any suspected information security violation or breach; and

  • Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.



