Middle/Senior Data Engineer for Technology Office (#2476)

Location: Ukraine, Poland
Work type: Office/Remote
Technical Level: Senior
Job Category: Software Development
Project: Technology Office

N-iX is a software development service company with a 21-year history, leveraging Eastern European talent to serve Fortune 500 companies and tech startups. We operate in nine countries and employ over 2,000 professionals. Our Data and Analytics practice, within the Technology Office, specializes in data strategy, governance, and platforms, shaping the future for our clients.

We are seeking a Middle/Senior Data Engineer with expertise in Databricks, DBT, and Python to help us build and maintain efficient, scalable, and reliable data pipelines. The ideal candidate will have a strong background in data engineering, a deep understanding of data architecture, and the ability to work collaboratively with cross-functional teams to deliver impactful data solutions.

Responsibilities:

  • Design and Develop Data Pipelines: Build, maintain, and optimize scalable data pipelines using Databricks, DBT, and Python
  • Data Integration: Collaborate with data scientists, analysts, and other stakeholders to ensure seamless data integration across various sources and systems
  • Data Quality: Implement best practices for data quality, data governance, and data security to ensure the reliability and accuracy of data
  • Performance Optimization: Tune data processing and storage solutions to meet low-latency, high-throughput requirements
  • Troubleshooting: Diagnose and resolve issues related to data pipelines, data transformations, and data storage
  • Documentation: Maintain comprehensive documentation of data pipelines, data models, and ETL processes
  • Stay Current: Keep up to date with the latest trends and advancements in data engineering and related technologies

Requirements:

  • Extensive experience with Databricks, including ETL processes and data migration; a Databricks data engineering certification is preferred
  • Proficiency in using DBT (Data Build Tool) for transforming data in the warehouse (Snowflake, Databricks)
  • Advanced programming skills in Python for data processing, ETL, and integration tasks
  • Experience with cloud platforms such as AWS, Azure, or GCP
  • Strong knowledge of SQL for querying and manipulating data
  • Proficiency in data modeling and schema design for relational and non-relational databases
  • Excellent problem-solving skills with the ability to analyze complex data issues and deliver effective solutions
  • Experience with relational database systems (RDBMS) and migrating data to modern cloud platforms
  • Strong interpersonal and communication skills to work effectively with cross-functional teams
  • English – Upper-Intermediate

Qualifications:

  • Experience with big data technologies such as Spark
  • Knowledge of data governance frameworks, data quality management practices, and data security principles
  • Knowledge of popular data standards and formats (e.g., Delta Lake, Iceberg, Parquet, JSON, XML)
  • Knowledge of continuous integration and continuous deployment (CI/CD) practices for data pipelines

We offer:

  • Flexible working format: remote, office-based, or a mix of both
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and training sessions, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits