
Data Engineer at Ohm Systems

Ohm Systems - Remote (Beaverton, OR)

Job Description

Manager Notes:
- This position can be 100% remote.
- Candidates need previous AI/ML experience; work history in finance is preferred.
- Experienced in Databricks, Spark, Python, and Snowflake.
- Must have strong SQL skills.

Role Responsibilities:
- Contribute to the design and implementation of data products and features in collaboration with product owners, data analysts, and business partners using Agile/Scrum methodology.
- Contribute to overall architecture, frameworks, and patterns for processing and storing large data volumes.
- Contribute to the evaluation of new technologies, tools, and frameworks centered around high-volume data processing.
- Translate product backlog items into logical units of engineering work.
- Implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem (a brief illustrative sketch follows this listing).
- Build utilities, user-defined functions, libraries, and frameworks to better enable data flow patterns.
- Work with engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and followed.
- Build and incorporate automated unit tests and participate in integration testing efforts.
- Utilize and advance continuous integration and deployment frameworks.
- Troubleshoot data issues and perform root cause analysis.
- Work across teams to resolve operational and performance issues.

The following qualifications and technical skills will position you well for this role:
- Bachelor's degree in Computer Science or a related technical discipline.
- 7 years of experience in large-scale software development.
- 5 years of big data experience.
- Programming experience; Python or Scala preferred.
- Experience working with Hadoop and related processing frameworks such as Spark, Hive, etc.
- Experience with messaging, streaming, and complex event processing tooling and frameworks.
- Experience with data warehousing concepts, SQL, and SQL analytical functions.
- Experience with workflow orchestration tools such as Apache Airflow.
- Experience with source code control tools such as GitHub or Bitbucket.
- Ability to communicate effectively with team members, both verbally and in writing.
- Interest in and ability to quickly pick up new languages, technologies, and frameworks.
- Experience in Agile/Scrum application development.

The following skills and experience are also relevant to our overall environment and are nice to have:
- Experience with Java.
- Experience working in a public cloud environment, particularly AWS and Databricks.
- Experience with cloud warehouse tools such as Snowflake.
- Experience working with NoSQL data stores such as HBase, DynamoDB, etc.
- Experience building RESTful APIs to enable data consumption.
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation and automation tools such as Jenkins or CircleCI.
- Experience with practices such as continuous development, continuous integration, and automated testing.

These are the characteristics that we strive for in our own work, and we would love to hear from candidates who embody the same:
- Desire to work collaboratively with your teammates to come up with the best solution to a problem.
- Demonstrated ability to deliver results on multiple projects in a fast-paced, agile environment.
- Excellent problem-solving and interpersonal communication skills.
- Strong desire to learn and to share knowledge with others.

Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full Time
Experience: years
Vacancy: 1
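For orientation only, the distributed pipeline and SQL analytical-function work named in the responsibilities could look roughly like the PySpark sketch below. It is not part of the posting: the bucket paths, dataset, and column names are hypothetical, and it simply shows a windowed running total written back as date-partitioned Parquet.

# Hypothetical sketch of the kind of pipeline this role describes (PySpark).
# All paths and column names are made up for illustration.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("transactions_running_spend").getOrCreate()

# Read raw transaction events (assumed columns: account_id, amount, event_ts).
txns = spark.read.parquet("s3://example-bucket/raw/transactions/")

# SQL analytical (window) function: running total of spend per account,
# ordered by event time.
w = Window.partitionBy("account_id").orderBy("event_ts")
enriched = txns.withColumn("running_spend", F.sum("amount").over(w))

# Write the result partitioned by event date for downstream analysts.
(enriched
    .withColumn("event_date", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/transactions_running_spend/"))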

Resume Suggestions

Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.

Quantify your achievements with specific metrics and results whenever possible to show impact.

Emphasize your proficiency in relevant technologies and tools mentioned in the job description.

Showcase your communication and collaboration skills through examples of successful projects and teamwork.
