Middle Data Engineer (#4645)

South America
Work type:
Office/Remote
Technical Level:
Middle
Job Category:
Software Development
Project:
Fortune 500 industrial supply company

We are seeking a motivated Data Engineer to join our team. In this role, you will be responsible for developing and maintaining robust data pipelines that drive our business intelligence and analytics.

Requirements:

  • 2+ years of experience building batch and streaming ETL with Spark, Python, or Scala for data engineering or machine learning workloads; experience with Snowflake and Databricks is required.
  • 2+ years orchestrating and implementing pipelines with workflow tools such as Databricks Workflows, Apache Airflow, or Luigi.
  • 2+ years of experience preparing structured and unstructured data for data science models.
  • 2+ years of experience with containerization and orchestration technologies (Docker required; Kubernetes a plus); shell scripting in Bash/Unix is preferred.
  • Proficiency in SQL (including Oracle) and data manipulation techniques.
  • Experience applying machine learning within data pipelines to discover, classify, and clean data.
  • Experience implementing CI/CD with automated testing in Jenkins, GitHub Actions, or GitLab CI/CD.
  • Familiarity with AWS services, including but not limited to Lambda, S3, and DynamoDB.
  • Demonstrated experience implementing the data management life cycle, using data quality functions such as standardization, transformation, rationalization, linking, and matching.

We offer*:

  • Flexible working format - remote, office-based, or hybrid
  • A competitive salary and compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and training, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

*not applicable for freelancers
