About the Role
Work with cross-functional teams to build and maintain data pipelines and project infrastructure.
Manage end-to-end data engineering tasks, including data ingestion, transformation, and integration with the data lake. Ensure data quality and perform data transformations within cloud ecosystems.
Support data-related initiatives, particularly those involving AWS, GCP, Azure, and Snowflake cloud services, as well as popular marketing and analytics platforms.
Requirements
At least 2 years of experience with ADF (Azure Data Factory), Databricks, PySpark, and SQL.
Hands-on experience with AWS, GCP, Azure, and Snowflake.
Strong conceptual understanding of data warehousing.
Understanding of data modelling and SQL.
Experience with building and deploying ETL/ELT pipelines.
Experience integrating data pipelines into DevOps workflows.
Hands-on experience with modern data storage systems (e.g. ADLS, Synapse).
Experience with DevOps practices and continuous delivery.
Ability to translate high-level business and technical requirements into technical specifications.
Comfortable working with Azure cloud technologies.
Customer-centric, passionate about delivering great digital products and services.
A demonstrated engineering-craftsmanship mindset.
Passionate about continuous improvement, collaboration, and great teams.
Strong problem-solving skills combined with good communication skills.
About the Company
At Neuroverge Cloud Systems.ai, we are at the forefront of innovation, delivering cutting-edge artificial intelligence and cloud computing solutions that transform industries. Our dynamic and collaborative work environment fosters creativity and growth, allowing individuals to thrive professionally while maintaining a healthy work-life balance. We are committed to continuous learning, providing our team with exceptional opportunities to develop new skills and expand their expertise. Join us and be part of a forward-thinking company that values innovation, growth, and your success.