BHFT

Data Engineer at BHFT

BHFT Remote - New York, NY

Job Description

The Data Engineering team is responsible for designing, building, and maintaining the Data Lake infrastructure, including ingestion pipelines, storage systems, and internal tooling for reliable, scalable access to market data.

Key Responsibilities

Ingestion & Pipelines: Architect batch and streaming pipelines (Airflow, Kafka, dbt) for diverse structured and unstructured market data. Provide reusable SDKs in Python and Go for internal data producers.

Storage & Modeling: Implement and tune S3, column-oriented, and time-series data storage for petabyte-scale analytics; own partitioning, compression, TTL, versioning, and cost optimisation.

Tooling & Libraries: Develop internal libraries for schema management, data contracts, validation, and lineage; contribute to shared libraries and services used by internal data consumers for research, backtesting, and real-time trading.

Reliability & Observability: Embed monitoring, alerting, SLAs/SLOs, and CI/CD; champion automated testing, data quality dashboards, and incident runbooks.

Collaboration: Partner with Data Science, Quant Research, Backend, and DevOps teams to translate requirements into platform capabilities and evangelise best practices.

Qualifications

5 years of experience building and maintaining production-grade data systems, with proven expertise in architecting and launching data lakes from scratch.

Expert-level Python development skills (Go and C nice to have).

Hands-on experience with modern orchestration tools (Airflow) and streaming platforms (Kafka).

Advanced SQL skills, including complex aggregations, window functions, query optimization, and indexing.

Experience designing high-throughput APIs (REST/gRPC) and data access libraries.

Solid fundamentals in Linux, containerization (Docker), and cloud object storage solutions (AWS S3, GCS).

Strong knowledge of handling diverse data formats, including structured and unstructured data, with experience optimizing storage strategies such as partitioning, compression, and cost management.

Fluency in English for confident communication, documentation, and collaboration within an international team.

Additional Information

What we offer:

Working in a modern international technology company without bureaucracy, legacy systems, or technical debt.

Excellent opportunities for professional growth and self-realization.

We work remotely from anywhere in the world with a flexible schedule.

We offer compensation for health insurance, sports activities, and professional training.

Remote Work: Yes
Employment Type: Full-time
Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Department / Functional Area: Data Engineering
Experience: years
Vacancy: 1

Resume Suggestions

Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.

Quantify your achievements with specific metrics and results whenever possible to show impact.

Emphasize your proficiency in relevant technologies and tools mentioned in the job description.

Showcase your communication and collaboration skills through examples of successful projects and teamwork.
