- Assist in building and maintaining ETL pipelines to collect, process, and store data from various sources.
- Work with relational and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
- Learn how to deploy and monitor data systems on cloud platforms (AWS, GCP, Azure).
- Write scripts and tools for data cleaning and standardization.
- Collaborate with the team on researching and testing solutions for data lakes and data warehouses.
Qualifications
- Currently a 3rd- or 4th-year student or a recent graduate in Computer Science, Data Engineering, IT, or a related field.
- Foundational knowledge of SQL and Python (or Java/Scala).
- Understanding of ETL concepts, data pipelines, and API integration.
- Bonus: exposure to cloud services (AWS/GCP/Azure) or tools such as Apache Spark, Kafka, or Airflow.
- Good English reading and comprehension skills.
What We Offer
- Mentorship and hands-on guidance from Senior Data Engineers.
- Valuable experience working with real-world datasets and microservice systems.
- Internship allowance and the potential for full-time employment after the internship.
- A dynamic and supportive environment that fosters continuous learning and growth.
Ready to make an impact?
If you’re excited by this opportunity, we want to hear from you! Send your CV, along with your portfolio or GitHub profile (if available), to [email protected]. We look forward to learning more about you!