Brains Workgroup

Data Platform Engineer at Brains Workgroup

Brains Workgroup, New York, NY

Job Description

Our client, a major bank in New York City, is looking for a Data Platform Engineer. This is a permanent position with a competitive compensation package (base range 120-150K), excellent benefits, and a target bonus. Must be in the New York City office 2-3 days per week.

Data Platform Engineer

We are looking for a highly skilled Kafka Platform Engineer to design, build, and operate our enterprise event-streaming platform using Red Hat AMQ Streams (Kafka on OpenShift). In this role you will be responsible for ensuring a reliable, scalable, secure, and developer-friendly streaming ecosystem. You will work closely with application teams to define and implement event-driven integration patterns, and you will leverage GitLab and Argo CD to automate platform delivery and configuration. This position requires a strong blend of platform engineering, DevOps practices, Kafka cluster expertise, and architectural understanding of integration and streaming patterns.

Qualifications:

- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience with Kafka administration and management.
- Strong knowledge of OpenShift and container orchestration.
- Proficiency in scripting languages such as Python or Bash.
- Experience with monitoring and logging tools (e.g., Splunk, Prometheus, Grafana).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Preferred Qualifications:

- Experience with Red Hat OpenShift administration.
- Knowledge of service mesh patterns (Istio, OpenShift Service Mesh).
- Familiarity with stream processing frameworks (Kafka Streams, ksqlDB, Flink).
- Experience using observability stacks (Prometheus, Grafana).
- Background working in regulated or enterprise-scale environments.
- Knowledge of DevOps practices and tools (e.g., Argo CD, Ansible, Terraform).
- Knowledge of SRE monitoring and logging tools (e.g., Splunk, Prometheus, Grafana).

Responsibilities:

Kafka & AMQ Streams Engineering

- Design, deploy, and operate AMQ Streams (Kafka) clusters on Red Hat OpenShift.
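With AMQ Streams (Red Hat's distribution of the Strimzi operator), a Kafka cluster is declared as a Kubernetes custom resource rather than installed by hand. As a rough illustration of what "design, deploy, and operate" looks like in practice, a minimal `Kafka` resource might resemble the sketch below; the cluster name, sizing, and storage values are hypothetical, and exact fields vary by AMQ Streams version (KRaft-mode clusters, for example, additionally use `KafkaNodePool` resources):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: events-cluster        # hypothetical cluster name
  namespace: kafka
spec:
  kafka:
    replicas: 3               # broker count
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true             # TLS-encrypted internal listener
    config:
      default.replication.factor: 3
      min.insync.replicas: 2  # survive one broker outage without data loss
    storage:
      type: persistent-claim  # persistent volumes for broker logs
      size: 500Gi
  entityOperator:
    topicOperator: {}         # reconciles KafkaTopic custom resources
    userOperator: {}          # reconciles KafkaUser custom resources
```

Because the whole cluster is expressed declaratively, manifests like this are exactly what the GitOps workflow described later (GitLab plus Argo CD) versions and promotes across environments.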
- Configure and manage Kafka components including brokers, KRaft, and MirrorMaker 2.
- Explore Kafka Connect and Schema Registry concepts and implementations.
- Ensure performance, reliability, scalability, and high availability of the Kafka platform.
- Implement cluster monitoring, logging, and alerting using enterprise observability tools.
- Manage capacity planning, partition strategies, retention policies, and performance tuning.

Integration Patterns & Architecture

- Define and document standardized event-driven integration patterns, including:
  - Event sourcing
  - CQRS
  - Pub/sub messaging
  - Change data capture
  - Stream processing and enrichment
  - Request-reply over Kafka
- Guide application teams on using appropriate patterns that align with enterprise architecture.
- Establish best practices for schema design, topic governance, data contracts, and message lifecycle management.

Security & Governance

- Implement enterprise-grade security for Kafka, including RBAC, TLS, ACLs, and authentication/authorization integration (SSO and OAuth).
- Maintain governance for topic creation, schema evolution, retention policies, and naming standards.
- Ensure adherence to compliance, auditing, and data protection requirements (encryption at rest and in flight).

Collaboration & Support

- Provide platform guidance and troubleshooting expertise to development and integration teams.
- Partner with architects, SREs, and developers to drive adoption of event-driven architectures.
- Create documentation, runbooks, and internal knowledge-sharing materials.

CI/CD & GitOps Automation

- Build and maintain GitOps workflows using Argo CD for declarative deployment of Kafka resources and platform configurations.
- Develop CI/CD pipelines in GitLab enabling automated builds, infrastructure updates, and configuration promotion across environments.
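To make the capacity-planning responsibility above concrete, here is a minimal Python sketch of the underlying arithmetic: retained disk per broker for a topic grows with write throughput, retention period, and replication factor. It is an illustration only, not part of the posting; the function name and all example numbers are hypothetical, and it assumes partitions (and their replicas) spread evenly across brokers.

```python
def retained_mb_per_broker(
    write_mb_per_sec: float,
    retention_hours: float,
    replication_factor: int,
    broker_count: int,
) -> float:
    """Rough per-broker disk estimate (in MB) for one topic.

    Assumes an even spread of partition replicas across brokers and
    ignores compression, indexes, and segment-rollover slack.
    """
    # Logical data retained cluster-wide over the retention window.
    logical_mb = write_mb_per_sec * retention_hours * 3600
    # Physical footprint includes every replica.
    physical_mb = logical_mb * replication_factor
    return physical_mb / broker_count

# Example: 5 MB/s sustained writes, 72 h retention, RF=3, 6 brokers.
per_broker = retained_mb_per_broker(5, 72, 3, 6)
print(f"{per_broker / 1024:.0f} GiB per broker")  # → 633 GiB per broker
```

Estimates like this feed directly into the partition-strategy and retention-policy decisions the role owns: halving retention or replication factor halves the footprint, while adding brokers spreads the same footprint thinner.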
- Maintain Infrastructure-as-Code (IaC) repositories and templates for Kafka resources.

Please email your resume or use the application link to apply directly.

Employment Type: Full Time
Vacancy: 1

Resume Suggestions

Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.

Quantify your achievements with specific metrics and results whenever possible to show impact.

Emphasize your proficiency in relevant technologies and tools mentioned in the job description.

Showcase your communication and collaboration skills through examples of successful projects and teamwork.
