Company: SS&C Technologies
Location: Kansas City, MO
Career Level: Mid-Senior Level
Industries: Technology, Software, IT, Electronics

Description

SS&C is a leading financial services and healthcare technology company (as measured by revenue), headquartered in Windsor, Connecticut, with 27,000+ employees across 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.

Job Description

Job Title: Senior Data Platform Software Engineer

Location: Kansas City, MO (Remote Eligible)

We are seeking a skilled Senior Data Platform Software Engineer to join our Data Platform team in Kansas City, MO. In this role, you will design, build, and optimize a cloud-native data stack that leverages best-in-class open-source tools: an opinionated, resilient, and scalable data platform running in a private cloud environment that enables data-driven decision-making, analytics, and machine learning while providing deep insights out of the box. The role blends data engineering, software development, and infrastructure management, using tools such as Apache Iceberg, Java, Airflow, Spark, Kafka, and Superset.

Core Responsibilities

Data Pipelines

  • Develop and maintain robust, fault-tolerant data ingestion and transformation pipelines using Java, Python, and Spark (a minimal sketch follows this list).
  • Define flexible and scalable data schemas using Apache Iceberg.
  • Support both batch and real-time data processing, including integration with Apache Kafka.
  • Ensure reliability, observability, and integrity of data pipelines.
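
To make the bullets above concrete, here is a minimal batch-ingestion sketch using Spark's Java API against an Iceberg table. All catalog, table, and path names are illustrative, and an Iceberg catalog (called "lake" here) is assumed to be configured on the cluster; a streaming variant would swap the batch read for a Kafka source.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    // Minimal batch ingestion job: read raw JSON from object storage and
    // append it to an Iceberg table. All names here are illustrative.
    public class TradeIngestJob {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("trade-ingest")
                    // Assumes spark.sql.catalog.lake is configured on the
                    // cluster to point at the Iceberg catalog.
                    .getOrCreate();

            Dataset<Row> raw = spark.read().json("s3a://landing/trades/");

            // Iceberg provides atomic appends, snapshots, and schema evolution.
            raw.writeTo("lake.finance.trades").append();

            spark.stop();
        }
    }

Appending through Iceberg keeps each run atomic and snapshot-isolated, which directly supports the reliability and observability bullet above.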

Data APIs

  • Design and implement scalable, secure RESTful and data APIs for data access and integration (illustrated in the sketch after this list).
  • Build APIs for data ingestion, transformation, and consumption across internal services.
  • Ensure API performance, consistency, and proper access controls.
  • Apply best practices for API design, versioning, and OpenAPI (Swagger) documentation.
  • Integrate APIs with orchestration tools like Apache Airflow and metadata platforms.
  • Enable seamless interoperability between data consumers, pipelines, and governance systems.
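
By way of illustration only, a versioned read endpoint might look like the sketch below. Spring Boot, the route, and the payload are assumptions; the posting names no particular web framework.

    import java.util.List;
    import java.util.Map;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical versioned data-access endpoint. Spring Boot is an
    // assumption; the posting only asks for "RESTful and data APIs".
    @RestController
    @RequestMapping("/api/v1/datasets")
    public class DatasetController {

        @GetMapping("/{name}/rows")
        public List<Map<String, Object>> rows(
                @PathVariable String name,
                @RequestParam(defaultValue = "100") int limit) {
            // A real implementation would authorize the caller, then
            // delegate to a query engine such as Spark before returning rows.
            return List.of(Map.of("dataset", name, "limit", limit));
        }
    }

Versioning in the path ("/api/v1") plus OpenAPI annotations on the controller would cover the documentation and versioning bullets above.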

Metadata Management & Data Governance

  • Evaluate and implement metadata management platforms such as DataHub, Apache Atlas, or OpenMetadata to support data cataloging, lineage, and governance use cases.
  • Collaborate with data stakeholders to align metadata solutions with organizational needs.
  • Define and enforce governance policies related to data quality, privacy, and compliance (e.g., GDPR, CCPA).
  • Implement fine-grained access controls, encryption, and auditing with a focus on regulatory compliance and data traceability (see the sketch after this list).
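
The access-control and auditing bullet can be prototyped as simply as a policy check that records every decision. The sketch below is purely illustrative and not tied to any specific governance product; production systems would use a policy engine and a durable audit store.

    import java.time.Instant;
    import java.util.Map;
    import java.util.Set;

    // Illustrative fine-grained access check with an audit trail.
    // Real deployments would back this with a policy engine and a
    // durable audit store rather than an in-memory map and stdout.
    public class ColumnPolicy {
        // user -> columns they may read on a given table (hypothetical data)
        private final Map<String, Set<String>> allowed =
                Map.of("analyst", Set.of("trade_id", "amount"));

        public boolean canRead(String user, String column) {
            boolean ok = allowed.getOrDefault(user, Set.of()).contains(column);
            // Audit every decision for traceability (e.g., GDPR requests).
            System.out.printf("%s audit user=%s column=%s allowed=%b%n",
                    Instant.now(), user, column, ok);
            return ok;
        }
    }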

Automation & CI/CD

  • Automate data workflows, infrastructure provisioning, and deployments using tools like Airflow, Ansible, Salt, and Kubernetes.
  • Implement CI/CD pipelines for data platform updates and enhancements.

Performance Optimization

  • Optimize data storage and queries using Apache Iceberg and Spark to ensure high performance and low-latency access.
  • Identify and address performance bottlenecks; implement partitioning, caching, and indexing strategies (a partitioning example follows this list).
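
As one example of a partitioning lever, Apache Iceberg can evolve a table's partition layout in place without rewriting existing data. The table name below is illustrative, and the Iceberg Spark SQL extensions are assumed to be enabled on the session.

    import org.apache.spark.sql.SparkSession;

    // Partition-evolution sketch: route new writes into daily partitions
    // without rewriting existing data. The table name is illustrative and
    // the Iceberg SQL extensions are assumed enabled on the session.
    public class PartitionTuning {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("partition-tuning")
                    .getOrCreate();

            // Iceberg's hidden partitioning: queries filtering on trade_ts
            // automatically prune to the matching daily partitions.
            spark.sql("ALTER TABLE lake.finance.trades "
                    + "ADD PARTITION FIELD days(trade_ts)");

            spark.stop();
        }
    }

Because Iceberg's partitioning is hidden, existing queries keep working unchanged; only the physical layout of newly written data changes.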

Monitoring and Alerting

  • Monitor data platform health using tools such as Prometheus and Grafana dashboards (see the example after this list).
  • Configure real-time alerts to proactively detect and resolve pipeline failures or data issues.
  • Troubleshoot and resolve platform outages and data incidents promptly.
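
As a sketch of how a pipeline might surface health metrics, the example below uses the Prometheus Java client (one common choice; the metric name and port are illustrative). Grafana or Alertmanager can then alert on any increase in the counter.

    import io.prometheus.client.Counter;
    import io.prometheus.client.exporter.HTTPServer;

    // Expose a pipeline-failure counter for Prometheus to scrape; a Grafana
    // panel or Alertmanager rule can then page on any increase. The metric
    // name and port are illustrative choices.
    public class PipelineMetrics {
        static final Counter failures = Counter.build()
                .name("pipeline_failures_total")
                .help("Count of failed pipeline runs.")
                .labelNames("pipeline")
                .register();

        public static void main(String[] args) throws Exception {
            HTTPServer server = new HTTPServer(9091); // serves :9091/metrics
            // In a real job this would be called from the failure handler.
            failures.labels("trade-ingest").inc();
            Thread.currentThread().join(); // keep the scrape endpoint alive
        }
    }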

Collaboration

  • Work closely with data scientists, analysts, and engineers to understand data needs and deliver performant, scalable solutions.
  • Collaborate with cross-functional teams (Cloud Engineering, Network, and DevOps/Solutions Engineering) to troubleshoot and resolve infrastructure issues.

Qualifications

Education

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

Experience

  • 5–8+ years of experience in data engineering, with a strong focus on cloud-based data platforms.

Technical Skills

  • Strong programming skills in Java.
  • Deep knowledge of Apache Iceberg, Spark, Superset, and Kafka.
  • Familiarity with metadata management platforms like DataHub, Apache Atlas, or OpenMetadata and experience with their evaluation or implementation.
  • Experience with cloud-native infrastructure tools such as Kubernetes, Ansible, Salt, etc.

Soft Skills

  • Strong analytical and problem-solving skills.
  • Effective communication and collaboration with cross-functional teams.

Unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.

SS&C offers excellent benefits, including health and dental insurance, a 401(k) plan, and tuition and professional development reimbursement.

SS&C Technologies is an Equal Employment Opportunity employer and does not discriminate against any applicant for employment or employee on the basis of race, color, religious creed, gender, age, marital status, sexual orientation, national origin, disability, veteran status or any other classification protected by applicable discrimination laws.

