The opportunity:
As a Data Engineer, you will be part of the Operation Center, India (INOPC-PG), which aims to develop a global value chain in which key business activities, resources, and expertise are shared across geographic boundaries to optimize value for Hitachi Energy customers across markets. As part of the Transformers BU, we provide high-quality engineering and technology to Hitachi Energy worldwide. This is an important step in Hitachi Energy's Global Footprint strategy.
How you'll make an impact:
- Demonstrate technical expertise in data analytics while working within a team of diverse technical competencies.
- Build and maintain accurate and scalable data pipelines and infrastructure (e.g., SQL warehouses, data lakes) using cloud platforms such as Microsoft Azure and Databricks.
- Proactively work with business stakeholders to understand data lineage, definitions, and methods of data extraction.
- Write production-grade SQL and PySpark code to build the data architecture.
- Consolidate SQL databases from multiple sources, and clean and manipulate data in preparation for analytics and machine learning.
- Use data visualization tools such as Power BI to create professional quality dashboards and reports.
- Write high-quality documentation of data processing across projects to ensure reproducibility.
- Ensure compliance with applicable external and internal regulations, procedures, and guidelines.
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.
Your Background:
- BE / B.Tech in Computer Science, Data Science, or related discipline and at least 5 years of related working experience.
- 5 years of data engineering experience, with an understanding of lakehouse architecture, data integration frameworks, ETL/ELT pipelines, orchestration/monitoring, and star-schema data modeling.
- 5 years of experience with Python/PySpark and SQL (proficiency in PySpark, Python, and Spark SQL).
- 2-3 years of hands-on data engineering experience using Databricks as the primary tool (i.e., more than 60% of working time spent in Databricks, not just occasional use).
- 2-3 years of hands-on experience with different Databricks components (Delta Live Tables, Workflows, Unity Catalog, SQL Warehouse, CI/CD) in addition to using notebooks.
- Experience in Microsoft Power BI.
- Proficiency in both spoken and written English is required.