Senior Data Engineer (#4260)

South America
Work type: Office/Remote
Technical Level: Senior
Job Category: Software Development
Project: CTO Office

N-iX is expanding in Latin America! As a global software development company, we unite over 2,400 professionals across Europe and the Americas, delivering innovative solutions for industry-leading enterprises and Fortune 500 companies. Recognized as a Great Place to Work in Colombia, we invite you to join our growing team, enjoy flexible work options, collaborate on international projects, and thrive in a vibrant, inclusive tech community.

We are seeking a skilled and detail-oriented Senior Data Engineer to join our Data & Analytics team in the CTO office for a part-time engagement. This role is freelance-friendly, and you're welcome to combine it with other projects or commitments. You will lead the technical delivery of modern data platform solutions while engaging with clients and internal stakeholders to design, implement, and optimize enterprise-grade data systems.

Key Responsibilities:

  • Design, build, and maintain robust data pipelines within Databricks.
  • Collaborate closely with international teams, including data scientists and architects, to develop scalable data solutions.
  • Debug complex issues in data pipelines and proactively enhance system performance and reliability.
  • Set up Databricks environments on cloud platforms (Azure/AWS).
  • Automate processes using CI/CD practices and infrastructure tools such as Terraform.
  • Create and maintain detailed documentation, including workflows and operational checklists.
  • Develop integration and unit tests to ensure data quality and reliability.
  • Migrate legacy data systems to Databricks, ensuring minimal disruption.
  • Participate actively in defining data governance and management strategies.

Qualifications:

  • 5+ years of proven experience as a Data Engineer.
  • Advanced proficiency in Python for developing production-grade data pipelines.
  • Extensive hands-on experience with the Databricks platform.
  • Strong knowledge of Apache Spark for big data processing.
  • Familiarity with cloud environments, specifically Azure or AWS.
  • Proficiency with SQL and experience managing relational databases (MS SQL preferred).
  • Practical experience with Airflow or similar data orchestration tools.
  • Strong understanding of CI/CD pipelines and experience with tools like GitLab.
  • Solid skills in debugging complex data pipeline issues.
  • Proficiency in structured documentation practices.
  • B2 level or higher proficiency in English.
  • Strong collaboration skills, ability to adapt, and eagerness to learn in an international team environment.

Would be a plus:

  • Experience with Docker and Kubernetes.
  • Familiarity with Elasticsearch or other vector databases.
  • Understanding of dbt (data build tool).
  • Ability to travel abroad twice a year for on-site workshops.


We offer*:

  • Flexible working format: remote, office-based, or mixed
  • A competitive salary and a good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

*not applicable for freelancers
