DRC Systems

Lead Data Engineer – Big Data (Cloud & Palantir) at DRC Systems

DRC Systems | Remote (Melcher-Dallas, IA)

Job Description

Only W2 (no H1B, no OPT, no C2C, no 1099)
Duration: 6-month contract with possible extension
Title: Lead Data Engineer – Big Data (Cloud & Palantir)
Location: Remote
Start Date: ASAP

Position Overview:

The Lead Data Engineer will architect, build, and manage scalable cloud-based data pipelines supporting enterprise analytics and reporting. This role requires deep big data expertise, hands-on experience with Palantir Foundry, and strong collaboration with business and analytics teams. The engineer will serve as a Subject Matter Expert (SME) and lead end-to-end delivery of data solutions across manufacturing and financial data domains.

Key Responsibilities:

Data Engineering & Architecture
- Design, build, and maintain large-scale data pipelines in cloud environments.
- Develop best practices for data ingestion, transformation, curation, and reporting.
- Architect scalable pipelines for enterprise-wide financial and manufacturing data.
- Implement efficient ETL/ELT workflows using big data and cloud technologies.

Collaboration & Leadership
- Act as a liaison between business users, analysts, and engineering teams.
- Lead the delivery of data-driven solutions from requirements through deployment.
- Guide and mentor data engineers and analysts on best practices and technical patterns.
- Support business stakeholders by translating their needs into technical solutions.

Analytics & Optimization
- Collect and analyze large datasets to support efficiency improvements and predictive analytics.
- Ensure availability of clean, reliable, and well-modeled data for enterprise reporting.
- Recommend strategies for optimized data storage, performance, and cost management in the cloud.

Required Qualifications:

Education
- Bachelor's degree in Computer Science, Engineering, or a related field.

Experience
- 8-10 years of strong data engineering experience.
- Proven experience in cloud-based data platforms and big data ecosystems.
- Strong background designing and building end-to-end data pipelines at enterprise scale.
Technical Skills
- Hands-on experience with Palantir Foundry for building scalable data workflows.
- Strong knowledge of the Hadoop ecosystem: Hive, PySpark, Spark, etc.
- Proficiency in Python or Java for data engineering solutions.
- Experience with ETL tools, concepts, and pipelines.
- Expertise with NoSQL databases (HBase, MongoDB) and RDBMS (SQL Server, Oracle, MySQL).
- Experience working in cloud environments (AWS, Azure, or GCP).
- Solid understanding of Git and Agile SDLC processes.

Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication skills across technical and non-technical audiences.
- Ability to lead, mentor, and collaborate within cross-functional teams.

Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type: Full Time
Experience: years
Vacancy: 1

Resume Suggestions

Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.

Quantify your achievements with specific metrics and results whenever possible to show impact.

Emphasize your proficiency in relevant technologies and tools mentioned in the job description.

Showcase your communication and collaboration skills through examples of successful projects and teamwork.
