Senior Data Engineer

We are seeking a talented, ambitious Data Engineer to support the Enterprise Information Management team within Global Business and Technology Solutions at Prudential.

The selected candidate will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data analyst, data pipeline builder, and data wrangler who enjoys analyzing data and building and optimizing data systems from the ground up. This is an exciting opportunity that involves working closely with data architects to implement the overall data strategy.


  • Building and maintaining data-intensive applications utilizing modern front-end and back-end technologies to deliver value to our businesses
  • Creating and maintaining optimal data pipeline architecture, assembling large, complex data sets that meet functional and non-functional business requirements
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability
  • Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Relational, NoSQL, and Hadoop technologies
  • Building and maintaining data services and data consumption tools that utilize the data pipeline to deliver actionable insights into key business performance metrics
  • Creating data visualizations for analytics and assisting other team members with using our data products
  • Working with partners, including the Architecture, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
  • Preparing and maintaining physical models and implementation-level details that affect the continuum of disciplines involved in the architecture, design, implementation, and management of enterprise information
  • Collaborating with other teams to design and develop data tools that support both operations and data application use cases
  • Analyzing large data sets using components from the Hadoop ecosystem
  • Evaluating big data technologies and prototyping solutions to improve our data processing architecture
  • Implementing the overall data integration framework for key data warehouse components

  • Bachelor's degree in Computer Science or a related field, or equivalent relevant experience
  • Strong analytical skills for working with structured and unstructured datasets
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Experience with big data technologies such as Hadoop, Hive, Impala, HBase, PySpark, Pig, Sqoop, HDFS, Solr, and related Apache and cloud-based services
  • Minimum of 5 years' experience with query languages, including SQL and HiveQL, and experience working with Relational, NoSQL, and Hadoop systems
  • Experience with object-oriented/functional scripting languages such as Python and Java; C would be an advantage
  • Experience with the Denodo Data Virtualization Platform and VQL
  • Experience leveraging cloud services for scalable solutions
  • Familiarity with designing, building, securing, and implementing microservice frameworks and components
  • Experience with data quality standards and controls
  • Working knowledge of back-end technologies (Node.js, Python, Java), front-end technologies (HTML, CSS, JavaScript), and data visualization tools (Tableau, Power BI)
  • Experience with ETL tools such as Informatica preferred
  • Hands-on experience implementing BI or data warehouse solutions preferred
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Proficiency with agile or lean development practices
  • Excellent written and verbal communication skills