Data Engineer – Azure Cloud

E I DuPont India Pvt Ltd

  • Gurgaon, Haryana
Job Posted: Jul 8, 2021

Job Description

The DuPont Water and Protection Analytics team provides analytical solutions and business data insights to a wide range of functions, including sales, supply chain, operations and finance. We are looking to expand our capacity by adding a Data Engineer. In this role, you will support all aspects of data management, from collecting, processing and loading data to working on projects that increase automation, improve data quality and drive value extraction.

Responsibilities:-

  • Design and build Azure cloud solutions based on Azure Data Lake Storage (ADLS), Databricks and Azure Data Factory (ADF).
  • Build and maintain a Lakehouse architecture in ADLS using Databricks.
  • Perform data preparation tasks such as data cleaning, normalization, deduplication and type conversion (a minimal sketch follows this list).
  • Work with DevOps team to deploy solutions in production environments.
  • Control data processes and take corrective action when errors are identified. Corrective action may include executing a workaround process and then identifying the cause of, and solution for, the data errors.
  • Participate as a full member of the global Analytics team, providing solutions for and insights into data related items.
  • Collaborate with Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices. You will lead projects that include other team members and participate in projects led by others.
  • Apply change management tools, including training, communication and documentation, to manage upgrades, changes and data migrations.
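
As a rough illustration of the data-preparation and Lakehouse work described above, the sketch below assumes a Databricks notebook (with its built-in SparkSession, spark) reading raw files from ADLS, cleaning and deduplicating them with PySpark, and writing a Delta table. The storage path, column names and table name are hypothetical examples, not part of the role description.

    # Minimal sketch: ADLS -> clean/deduplicate -> Delta table (hypothetical names throughout)
    from pyspark.sql import functions as F

    raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/"  # hypothetical ADLS path

    orders = (
        spark.read                      # `spark` is the SparkSession provided by Databricks
        .option("header", "true")
        .csv(raw_path)
    )

    cleaned = (
        orders
        .dropDuplicates(["order_id"])                             # deduplication
        .na.drop(subset=["order_id"])                             # basic cleaning
        .withColumn("order_date", F.to_date("order_date"))        # type conversion
        .withColumn("amount", F.col("amount").cast("double"))     # type conversion
        .withColumn("region", F.trim(F.lower(F.col("region"))))   # normalization
    )

    # Persist to the curated layer of the Lakehouse as a Delta table.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.sales_orders")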

Qualifications and Experience:-

  • Strong English language skills (written and spoken).
  • Bachelor’s Degree in math, science, engineering, IT or related area.
  • Strong knowledge of data concepts and tools.
  • Fluent and effective communication skills.
  • Ability to be an effective team member with remote colleagues spread across the world.
  • Fluency in SQL and Python.
  • Experience with cloud computing, preferably Azure (incl. ADF and ADLS).
  • Experience with distributed systems (e.g. Apache Spark or Databricks).
  • Experience with version control (e.g. Git).

Preference will be given to candidates with these skills:-

  • Delta Lake and Lakehouse architecture
  • Pandas and PySpark
  • Exposure to DevOps workflows
  • Power BI
  • ML pipelines (e.g. scikit-learn)

Location:-

  • Gurgaon, Haryana.

Company Overview:-
