We take a fresh approach: engineers and their interests come first. Your wants, your needs, your professional experience, your choice! We aim to reach every engineer and connect them with top-notch projects.
We are looking for a Big Data Engineer whose mission will be to:
- Take part in building a centralized data source (DataSwamp & DataLake) and the company's Data Warehouse that will feed from it, consolidating data from our different sites.
- Implement, automate, and optimize data pipelines, both daily batch extractions and real-time streaming, to feed our data source in S3.
- Provide the department with tools to query our data repository easily and quickly.
What we expect from you
- Amazon Web Services: specifically, deploying EMR clusters, ingesting data to and from S3, and querying Redshift.
- Good programming skills in Python & Java.
- Knowledge of SQL querying. Expertise in multiple databases (PostgreSQL, SQL Server...) & query optimization will be a plus.
- Good scripting skills to request datasets from REST APIs and to schedule queries.
- Big Data tools: Spark, Kafka consumers, Sqoop... Expertise in Spark transformations will be a plus.
- Continuous Integration & automation (Jenkins).
- Version control (GitHub).
- Experience in other Big Data and/or Business Intelligence projects.
- Containers (Docker).
- Data visualization tools & dashboards: Zeppelin, Qlik.
- Scrum methodology & Jira.
The Goodies for you
- Permanent contract, full-time position.
- Working with the best professionals in a leading company.
- Opportunities for development and growth within the company.