Applications open until DECEMBER 28, 2025
This position is also open to persons with disabilities (PwD)
Job type: Full-time employee
Work model: Remote
Be part of Stefanini!
At Stefanini, we are more than 30,000 geniuses, connected from 41 countries, doing what we love and co-creating a better future.
As part of the Data Engineering team, you will be responsible for the design, development, and operation of large-scale data systems operating at petabyte scale. You will focus on real-time data pipelines, streaming analytics, distributed big data, and machine learning infrastructure. You will work with engineers, product managers, BI developers, and architects to deliver scalable, robust technical solutions.
Responsibilities and assignments
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
Requirements and qualifications
- 5-6 years of big data development experience.
- Demonstrated up-to-date expertise in data engineering and complex data pipeline development.
- Experience with agile models.
- Experience with Java and Python for writing data pipelines and data processing layers.
- Experience with Airflow and GitHub.
- Experience writing MapReduce jobs.
- Proven, working expertise with big data technologies: Hadoop, Hive, Kafka, Presto, Spark, HBase.
- Highly proficient in SQL.
- Experience with cloud technologies (GCP, Azure).
- Experience with the relational model; in-memory data stores desirable (Oracle, Cassandra, Druid).
- Provides and supports the implementation and operation of data pipelines and analytical solutions.
- Performance tuning experience on systems working with large data sets.
- Experience with REST API data services (data consumption).
- Retail experience is a huge plus.
- Previous experience at Walmart is required.

Process stages
Step 1: Registration
Step 3: Interview
Step 5: Onboarding
Step 6: Hiring