Data Engineer at innovitusa
innovitusa
Des Moines, IA
Information Technology
Posted today
Job Description
Hiring: W2 candidates only
Visa: Open to any visa type with valid work authorization in the USA
Key Responsibilities
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability.
Lead and deliver data-driven solutions across multiple languages, tools, and technologies.
Contribute to architecture discussions, solution design, and strategic technology adoption.
Build and optimize highly scalable data pipelines incorporating complex transformations and efficient code.
Design and develop new source system integrations from varied formats (files, database extracts, APIs).
Design and implement solutions for delivering data that meets SLA requirements.
Work with operations teams to resolve production issues related to the platform.
Apply best practices such as Agile methodologies, design thinking, and continuous deployment.
Develop tooling and automation to make deployments and production monitoring more repeatable.
Collaborate with business and technology partners, providing leadership, best practices, and coaching.
Mentor peers and junior engineers; educate colleagues on emerging industry trends and technologies.
Required Qualifications
Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent experience.
7 years of data engineering/development experience, including Python or Scala, SQL, and relational/non-relational data storage (ETL frameworks, big data processing, NoSQL).
3 years of experience in distributed data processing (Spark) and container orchestration (Kubernetes).
Proficiency in data streaming with Kubernetes and Kafka.
Experience with cloud platforms; Azure preferred, AWS or Google Cloud Platform also considered.
Solid understanding of CI/CD principles and tools.
Familiarity with big data technologies such as Hadoop, Hive, HBase, object storage (ADLS/S3), and event queues.
Strong understanding of performance optimization techniques such as partitioning, clustering, and caching.
Proficiency with SQL, key-value datastores, and document stores.
Familiarity with data architecture and modeling concepts to support efficient data consumption.
Strong collaboration and communication skills; ability to work across multiple teams and disciplines.
Preferred Qualifications
Master's degree in Computer Science, Software Engineering, or a related field.
Knowledge of data governance, metadata management, or data quality/observability.
Familiarity with schema design and data contracts.
Experience handling various file formats (video, audio, image).
Experience with Databricks, Snowflake, or similar platforms.
Experience designing and implementing robust data ingestion frameworks for heterogeneous data sources (structured/unstructured files, external suppliers, supply chain systems).
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full Time
Experience: years
Vacancy: 1
Resume Suggestions
Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.
Quantify your achievements with specific metrics and results whenever possible to show impact.
Emphasize your proficiency in relevant technologies and tools mentioned in the job description.
Showcase your communication and collaboration skills through examples of successful projects and teamwork.