Data Engineer at VDart Inc
VDart Inc
New Boston, NH
Information Technology
Posted 0 days ago
Job Description
Title: Data Engineer
Location: Boston, MA (Onsite)
Type: Contract

Job Description:
An AWS Data Engineer designs, builds, and maintains scalable data pipelines, architectures, and solutions on the Amazon Web Services (AWS) cloud platform. Key responsibilities include data integration, building ETL processes using services such as AWS Glue and Redshift, data modeling, and ensuring data quality and security. The role requires proficiency in programming languages such as Python, along with skills in SQL, Apache Spark, and serverless architectures.

Key Responsibilities:
- Design and build data pipelines, and develop ETL/ELT processes using tools such as AWS Glue, EMR, and Redshift to prepare data for analytics.
- Integrate data from various sources, and create and maintain data models that are efficient for storage and analysis.
- Ensure data quality, security, and compliance by implementing checks, validation processes, and security best practices.
- Monitor and optimize data processing jobs and databases for performance.
- Collaborate with stakeholders to understand data requirements, and maintain and operationalize existing data solutions.

Minimum Skills Required:
- At least 6 years of relevant experience in the design and development of complete, end-to-end, enterprise-wide big data solutions.
- Experience designing and developing big data solutions using Spark, Scala, AWS Glue, Lambda, SNS/SQS, and CloudWatch is a must.
- Strong application development experience in Scala/Python.
- Strong database SQL experience, preferably with Redshift; experience with Snowflake is an added advantage.
- Experience with ETL/ELT processes and frameworks is a must.
- Strong background in AWS cloud services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, and Redshift.
- Expertise in SQL and experience with relational databases such as Oracle, MySQL, and PostgreSQL.
- Proficiency in Python programming for data engineering tasks and automation.
- Experience with shell scripting in Linux/Unix environments.
- Experience with big data technologies (Hadoop, Spark).
- Financial services experience required.
- Nice to have: knowledge of machine learning models, regression, and validation.

Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type: Full Time
Experience: years
Vacancy: 1
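The data-quality responsibility described above (implementing checks and validation before data reaches analytics) can be sketched in plain Python. This is a minimal illustration, not code from the role: the record shape, rules, and function names (`Trade`, `validate_rows`) are hypothetical, and a real AWS pipeline would apply such rules inside a Glue/Spark job reading from S3 and loading Redshift rather than over in-memory lists.

```python
# Illustrative validate step for an ETL pipeline: split raw records
# into well-typed rows and rejects. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Trade:
    symbol: str
    price: float
    quantity: int


def validate_rows(rows):
    """Apply simple data-quality rules: required keys, numeric types,
    non-empty symbol, positive price and quantity. Returns
    (valid Trade objects, rejected raw records)."""
    valid, rejected = [], []
    for row in rows:
        try:
            trade = Trade(
                symbol=str(row["symbol"]).strip(),
                price=float(row["price"]),
                quantity=int(row["quantity"]),
            )
        except (KeyError, TypeError, ValueError):
            rejected.append(row)  # malformed or missing fields
            continue
        if trade.symbol and trade.price > 0 and trade.quantity > 0:
            valid.append(trade)
        else:
            rejected.append(row)  # fails a business rule
    return valid, rejected


raw = [
    {"symbol": "AAPL", "price": "189.50", "quantity": "100"},
    {"symbol": "", "price": "10.0", "quantity": "5"},       # empty symbol
    {"symbol": "MSFT", "price": "-1", "quantity": "10"},    # bad price
    {"symbol": "GOOG", "price": "141.2", "quantity": "x"},  # bad quantity
]
valid, rejected = validate_rows(raw)
```

In a production job the rejected records would typically be written to a quarantine location (e.g. an S3 prefix) for inspection rather than discarded, so data-quality issues remain auditable.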
Resume Suggestions
Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.
Quantify your achievements with specific metrics and results whenever possible to show impact.
Emphasize your proficiency in relevant technologies and tools mentioned in the job description.
Showcase your communication and collaboration skills through examples of successful projects and teamwork.