Middle/Senior Data Engineer (#1793)

Location: Colombia, Ukraine
Technical Level: Middle/Senior
Job Category: Software Development
Client: a global biopharmaceutical company

We are seeking a proactive Middle/Senior Data Engineer to join our vibrant team. As a Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will have a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.

Key Responsibilities:

  • Collaborate with cross-functional teams to understand data requirements; design, implement, and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimized workflows.
  • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
  • Develop, implement, optimize, and maintain reliable data pipelines and ETL/ELT processes that collect, process, and integrate data, ensuring timely and accurate delivery to business applications while applying data governance and security best practices to safeguard sensitive information.
  • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
  • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
  • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

Tools and skills you will use in this role:

  • Palantir Foundry
  • Python
  • PySpark
  • SQL
  • TypeScript

Requirements:

  • 3+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry.
  • Strong proficiency in Python and PySpark.
  • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.).
  • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow).
  • Expertise in data modeling, data warehousing, and ETL/ELT concepts.
  • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.).
  • Proficiency in containerization technologies (e.g., Docker, Kubernetes).
  • Effective problem-solving and analytical skills, coupled with strong communication, collaboration, and teamwork abilities.
  • Understanding of data security and privacy best practices.
  • Strong mathematical, statistical, and algorithmic skills.

Nice to have:

  • Certification in cloud platforms or related areas.
  • Experience with the Apache Lucene search engine and RESTful web service APIs.
  • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry.
  • Knowledge of pharmaceutical industry regulations, such as data privacy laws.
  • Previous experience working with JavaScript and TypeScript.

We offer:

  • Flexible working format: remote, office-based, or a mix of both
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits
