InductiveHealth

Manager of Engineering, Platform & Interoperability

InductiveHealth • US
Java • Remote
Mission + People + Culture: With a corporate Mission to stop disease through technology, InductiveHealth is the market leader in software-as-a-service (SaaS) solutions for public health agencies. Our People come from all backgrounds and walks of life, ranging from world-class experts in epidemiology, informatics, and disease surveillance to engineers and product teams building high-performance, modern solutions. Mission + People are unified around a virtual-first Culture centered on teamwork, a relentless focus on client outcomes, and individual accountability.

Why work at InductiveHealth?
1. Motivation: We value initiative-takers and self-starters who want to contribute to the success of our Team and client outcomes.
2. Curiosity: Seeking to understand is critical - we expect and encourage questions so you can master your job duties and grow professionally.
3. Organization: We are a Team. This means we hold each other accountable and have high expectations for performance and outcomes.
4. Feedback: "Open and honest" is one of our corporate values, building a culture of professional growth that supports client success.
5. Impact: Your individual contributions will stop the spread of disease and improve individual, community, and population health outcomes.

InductiveHealth Informatics is seeking a Manager of Platform & Interoperability Engineering to lead two critical areas of our engineering organization: core platform services and data integration systems. This role will be responsible for building and scaling the foundational capabilities that power our public health products—ranging from identity and API infrastructure to complex data pipelines and interoperability frameworks. 

You will oversee teams responsible for enabling secure access, consistent service architecture, and reliable data exchange across systems. This role requires a systems thinker who can balance platform standardization with flexible integration patterns, modernize legacy pipelines, and ensure our ecosystem is scalable, secure, and built for long-term evolution. 

What You’ll Be Doing

  • Lead and support engineering teams across Platform Engineering and Interoperability, ensuring alignment on architecture, delivery, and engineering standards in support of the organization’s growth and focus areas 

  • Drive the design and evolution of core platform services, including identity (SSO/MFA), API gateway, and shared service layers used across products 

  • Own and scale identity and access management solutions (e.g., Keycloak), including federation, configuration, and secure authentication workflows 

  • Define and enforce best practices for API design, gateway management, and governance, including authentication, rate limiting, and observability, ensuring APIs are well-documented and developer-friendly 

  • Lead the development and scaling of data integration architecture, including Apache NiFi and legacy HL7-based processing systems 

  • Drive migration of legacy data pipelines into modern frameworks while maintaining continuity and system reliability 

  • Design and optimize data flows across APIs (FHIR/REST), SFTP, and internal systems, supporting both batch and near real-time processing 

  • Establish integration patterns, standards, and reusable components that can be leveraged across teams 

  • Ensure strong data validation, transformation, and error-handling practices across all pipelines to support easier troubleshooting 

  • Implement and evolve monitoring and observability practices (e.g., OpenSearch) to ensure system health and performance, with a focus on fast detection and recovery 

  • Partner with Product, Security, and Compliance to align platform and interoperability capabilities with regulatory and business needs, enabling great client and developer experiences 

  • Guide architectural decisions across platform and integration layers, ensuring consistency, scalability, and maintainability, with clear documentation 

  • Mentor engineers and technical leads, reinforcing best practices across platform services and data systems, modeling a growth mindset, and raising the bar for engineering excellence 

  • Lead incident response and root-cause analysis for complex system and integration issues, and drive long-term resolution strategies that prevent recurrence 

What We’re Looking For

  • Experience leading engineering teams across platform services, infrastructure, or data integration domains 

  • Strong understanding of identity and access management concepts, including SSO, MFA, and federation 

  • Hands-on experience with identity providers such as Keycloak or similar technologies 

  • Experience designing and managing API gateways with a focus on security, performance, and observability 

  • Strong experience building and scaling data pipelines using Apache NiFi or similar integration frameworks 

  • Deep understanding of healthcare data standards such as HL7 and FHIR 

  • Experience integrating systems via APIs (REST/FHIR) and secure file transfer methods (SFTP) 

  • Proficiency in backend development (e.g., Java or .NET) to support platform or integration services 

  • Strong understanding of distributed systems, including scalability, fault tolerance, and performance optimization 

  • Experience implementing logging, monitoring, observability and operational readiness practices across systems 

  • Ability to balance modernization efforts with ongoing support of legacy systems 

  • Experience working across multiple teams and influencing architectural direction 

  • Strong problem-solving skills and ability to navigate complex system and data challenges 

  • Excellent communication skills with the ability to translate platform and integration concepts into actionable guidance 

What Will Make You Stand Out

  • Experience leading both platform engineering and interoperability/data integration functions within the same organization, adopting a platform-as-a-service and product mindset 

  • Hands-on experience scaling Apache NiFi in enterprise environments, including performance tuning and flow design 

  • Strong experience improving identity and access management at scale, including MFA implementations and federation strategies 

  • Experience building intelligent service layers, such as patient matching systems (deterministic or probabilistic) 

  • Familiarity with caching strategies (e.g., Redis) to improve performance across platform or integration layers 

  • Experience working with observability tools such as OpenSearch for monitoring distributed systems 

  • Exposure to infrastructure-as-code or configuration tools (e.g., YAML, Bicep) 

  • Experience working with geospatial or public health data systems and complex data ecosystems 

  • Proven success leading large-scale modernization efforts across both platform services and integration pipelines 

  • Ability to drive adoption of shared platform capabilities while supporting team autonomy and flexibility 

  • A systems-oriented mindset with a strong ability to balance standardization, scalability, and real-world implementation constraints