We are seeking an AWS Big Data Engineer for our Information Management practice. The role involves designing and building an AWS big data platform from scratch, and building data management systems that combine core data sources into data lakes or other accessible structures to support integration, reporting, and analytical systems. The ideal candidate has prior big data and AWS experience, hands-on experience with Hadoop, and deep experience with traditional relational databases and data integration.
Duties and Responsibilities:
- Build and maintain the AWS big data solution.
- Perform data conversion, data imports, data exports and transformations.
- Design and implement processes to ensure data integrity and standardization.
- Continuously improve the Hadoop environment to ensure it is highly performant and provides an optimal end-user experience.
- Implement data reliability, efficiency, and quality improvements.
- Bring complex concepts into the organization and mentor others.
- Learn independently as well as from others, and push for improvement by bringing new ideas into the organization.
Requirements:
- Hadoop experience, particularly familiarity with HDFS and Hive
- Python (PySpark a plus) and Java
- AWS experience, especially EC2, S3, and EMR (CloudFormation or other infrastructure scripting a plus)
- Bash scripting
- Test-driven development (unit testing)
- RDBMS/ETL experience
- Streaming data (Kafka/Kinesis/etc.)
- CI/CD, Docker
- ETL orchestration (Airflow/Luigi)
Benefits:
- Flexible working hours
- Competitive salary and compensation package
- 18 business days of paid annual leave
- 10 days of paid sick leave
- Personal education budget
- IT-BPO club membership card (loyalty program)
- On-site gym, masseur, and doctor
- Relax zones
- Corporate events and outstanding parties
- Team-building events
- Anniversary presents