Job Description:
• Design, develop, and maintain data pipelines and data stores that support enterprise data science, analytics, and operations.
• Identify, track, and analyze the performance of data layer services, implementing solutions and improvements to enhance stability, efficiency, and performance.
• Collaborate with your peers across the Research and Technology organization.
Requirements:
• Bachelor’s degree in Computer Science, Information Systems, or a related field, or demonstrated equivalent ability, is required.
• Experience programming in Python using an IDE, ideally having built an application, framework, or program from scratch.
• Experience working with data environments in an engineering or technical capacity.
• Experience working with data using relational (SQL) and/or non-relational (NoSQL) approaches.
• Experience with, or exposure to, data visualization principles and practices, especially Power BI and Tableau.
• Experience ingesting data from internal and external sources using cloud-native AWS services and related tooling (e.g., S3, EMR (Spark), Lambda, Kinesis, Firehose, Glue, Terraform).
• Strong communication skills and an interest in advancing your knowledge of data engineering.
• Excellent time management skills.
• Driven, self-motivated attitude; able to do great work independently with minimal direction.
• Ambitious and collaborative team player.
Benefits:
• Salary in USD
• Flexible schedule (within US time zones)
• 100% Remote
Apply Now