Sr Staff Data Engineer - GE07DE
We're determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals - and to help others accomplish theirs, too. Join our team as we help shape the future.
Enterprise Data Services - Sales & Distribution IT is undergoing a transformation to the cloud, and we are looking for an enthusiastic Sr. Staff Data Engineer to join our team. In this role, you will lead efforts to develop, enhance, and support new and existing ETL data pipelines, ingestion, and storage across multiple medium- and large-sized projects concurrently. This is an individual contributor position responsible for expanding and optimizing data pipeline and product architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
You will work on implementing complex data projects, focusing on collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into insights across multiple platforms. You will support our software developers, data architects, analysts, and data scientists on vital initiatives and ensure optimal data delivery architecture. You will lead the design of vital components and suggest and implement new frameworks and tools based on industry and technological trends and advances. You will also consult with process owners to review, interpret, and develop systems in accordance with user requirements, ensuring code quality through proper documentation, best practices, and code reviews. Working in our collaborative environment, you will further develop your skills while acting as a mentor and technical guide to other data engineers.
This role has a hybrid work schedule, with the expectation of working in an office location (Hartford, CT, or Charlotte, NC) three days a week (Tuesday through Thursday).
Job responsibilities:
- Demonstrate self-initiative and the ability to learn new technology stacks as per the defined architecture
- Design, develop, and maintain data pipelines for extraction, transformation, and loading processes using ETL tools and R/Python in cloud and Oracle/SQL Server environments to support data analytics and reporting efforts
- Use modern cloud ETL tools and big data platforms such as Hadoop to develop complex data assets that support organizational decision-making through prototyping, data discovery, and profiling
- Prototype high-impact innovations that cater to changing business needs by leveraging new technologies (AWS cloud and big data)
- Act as a subject matter expert on existing data assets, performing root cause analysis and resolving business and technical questions
- Develop and implement Proofs of Concept (PoCs), generic frameworks, and real-time pipelines using SQL, Amazon Web Services (AWS), ETL tools, Snowflake, and Python
- Introduce automation of software application development using Continuous Integration/Continuous Delivery (CI/CD) methodologies
- Support documentation, metadata, and data visualization of the assets
- Participate in the development of the implementation plan as a subject matter expert
- Develop and support the migration of reporting assets from the SQL Server / .NET stack to Snowflake / Tableau / ThoughtSpot
- Collaborate with the Enterprise Data teams to provide user acceptance testing, maintain data quality, and advance the technical toolset
- Coordinate activities with cross-functional IT stakeholders (e.g., database, operations, telecommunications, technical support, business)
- Research and evaluate alternative solutions and recommend the most efficient and cost-effective solution for the systems design
- Review sprint plans and disaster recovery plans, evaluate system readiness, and prepare periodic activity and progress reports for the team
- Develop and maintain data analytics tools and frameworks to support Artificial Intelligence and Machine Learning use cases
- Stay up to date with emerging data technologies and industry best practices
- Demonstrate a strong willingness to explore and leverage new tools and technologies as project needs require
- Operate as a confident self-starter, capable of independently driving multiple concurrent projects to completion
Knowledge, Skills, and Abilities
- Strong technical knowledge of cloud data pipelines and data consumption products
- Leader and team player with a transformation mindset
- Ability to lead successfully in a lean, agile, and fast-paced organization, leveraging Scaled Agile principles and ways of working
- Ability to understand and align deliverables to departmental and organizational strategies and objectives
- Ability to provide thought leadership to dynamic and collaborative teams, demonstrating excellent interpersonal skills and time management capabilities
- Skilled in planning, organization, compromise, and risk mitigation
- Industry awareness of evolving design patterns for cloud and Software-as-a-Service (SaaS) solution integration
- Critical thinking and creativity in designing optimized solutions
- Ability to guide the team in maturing code quality management, FinOps principles, automated testing, and environment management practices to deliver incremental customer value
- Advanced knowledge and understanding of the DevOps technology stack and standards
- Experience with reporting tools such as Tableau, Business Objects, or MicroStrategy
- Insurance and financial services domain knowledge
- Knowledge of AWS cloud technologies: CodeBuild, CodePipeline, containers
- Awareness of AI, ML, and data science practices
- .NET experience is a huge plus
Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Minimum of 5 years of experience as a Data Engineer, with strong quantitative, analytical, and data manipulation skills
- Experience with ETL tools (Informatica, IDMC, Talend, etc.)
- Knowledge of unit, interface, and end-user testing concepts and tooling (functional and non-functional)
- Knowledge of any cloud tech stack (AWS, Azure, etc.)
- Advanced knowledge of SQL as it pertains to data, analytics, and reporting on any relational database (Oracle, Snowflake, SQL Server, etc.)
- Experience with any scripting or programming language (Python, JavaScript, etc.)
- Experience with test automation and DevOps tools
- Knowledge of Agile Scrum/SAFe methodology
- Ability to use collaboration tools such as Rally and Jira effectively
Compensation
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$135,040 - $202,560
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age