Sr. Data Engineer (Data Science focus)

Any US Location
Comp Range: Depends on location
Python, Kafka, Spark

We are seeking sharp, highly motivated data engineers who are excited to play a major role in our client's product and engineering evolution. We are looking for product-focused, results-oriented engineers who thrive in a collaborative, team-focused culture. You will work closely with data scientists, analysts, product managers, business stakeholders, and your team to help define, implement, and ship significant increments to the data infrastructure that powers our client's mission-driven product offering.

Responsibilities
  • Work across all phases of the software development lifecycle in a cross-functional, agile development team setting
  • Collaborate with data scientists and analysts to prepare complex data sets that can be used to solve difficult problems
  • Administer, maintain, and improve the data infrastructure and data processing pipelines, including ETL jobs, event processing, and job monitoring and alerting
  • Deliver high-quality, well-tested technical solutions that make sense for the problem at hand
  • Fearlessly work across components, services, and concerns to deliver business value
  • Partner with engineers, data scientists, and the CDO to define and refine our data architecture and technology choices
  • Help define, implement, and reinforce data engineering best practices and processes
  • Contribute to Steady’s technical vision
Job Requirements
  • Significant data engineering and/or software development experience (5-7 years minimum)
  • Experience with ingesting, processing, and transforming data at scale
  • Demonstrated proficiency with SQL, relational databases, and data warehousing concepts
  • Demonstrated aptitude with ETL concepts and tools such as Airflow or AWS Glue
  • Experience with the AWS data ecosystem (Glue, Kinesis, S3, Lambda, EMR, Redshift, etc.)
  • Understanding of event-driven and/or streaming workflows with tools like Kafka and Spark
  • Experience administering cloud-based analytics databases (like Snowflake or Redshift)
  • Knowledge of Python, R, or other languages commonly used in data science
  • Ability to thrive in a fast-paced and dynamic environment
  • Ability to work well in teams of all sizes with representatives from a diverse set of technical backgrounds
  • Bachelor’s or Master’s degree in Computer Science or equivalent experience
Nice to have
  • Experience with infrastructure automation tools such as Terraform or CloudFormation
  • Experience with indexing and search technologies such as Elasticsearch or Solr
  • Experience building or maintaining a data science modeling environment such as SageMaker or Databricks, including deployment and monitoring using tools like MLflow