Middle/Senior Big Data Engineer (with Palantir Foundry) (#1858)

Ukraine
Work type:
Office/Remote
Technical Level:
Senior
Job Category:
Software Development
Project:
Ringier

We are looking for a proactive Middle/Senior Big Data Engineer to join our vibrant team! You will play a critical role in designing, developing, and maintaining sophisticated data pipelines using Foundry tools such as Ontology, Pipeline Builder, and Code Repositories. The ideal candidate will have a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.

Tools and skills you will use in this role: Palantir Foundry, Python, PySpark, SQL, basic TypeScript.

Responsibilities:

  • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
  • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
  • Develop, optimize, and maintain efficient and reliable data pipelines and ETL/ELT processes that collect, process, and integrate data, ensuring timely and accurate delivery to business applications. Apply data governance and security best practices to safeguard sensitive information.
  • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
  • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
  • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
  • Be eager to learn new tools and technologies.


Requirements:

  • 4+ years of experience in data engineering;
  • Strong proficiency in Python and PySpark;
  • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
  • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
  • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
  • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
  • Effective problem-solving and analytical skills;
  • Strong communication, collaboration, and teamwork abilities;
  • Understanding of data security and privacy best practices;
  • Strong mathematical, statistical, and algorithmic skills.


Nice to have:

  • Certification in cloud platforms or related areas;
  • Experience with the OpenAI API or other LLM APIs;
  • Familiarity with containerization technologies (e.g., Docker, Kubernetes);
  • Basic understanding of HTTP for making API calls;
  • Familiarity with Palantir Foundry;
  • Previous work or academic experience with JavaScript/TypeScript.

We offer:

  • Flexible working format: remote, office-based, or hybrid
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits
