State Street

Azure DevOps Engineer, Assistant Vice President, Hybrid

Princeton, NJ

Job Description

Who we are looking for

The candidate must have 5+ years of experience in IT or a relevant industry. This position is responsible for designing, implementing, and maintaining infrastructure and tools, with the main objective of automating the provisioning and monitoring of DevOps and Azure infrastructure. This role can be performed in a hybrid model, allowing you to balance work from home and the office to match your needs and role requirements.

What you will be responsible for

As an Azure DevOps Engineer, you will:

  • Collaborate across a variety of teams to enable our Data Platform needs, and design, implement, and maintain a secure, scalable infrastructure platform spanning AWS/Azure and our data center.
  • Collaborate with developers and other team members to identify and implement automated build, test, and deployment processes.
  • Troubleshoot issues with CI/CD pipelines and identify areas for improvement in the process.
  • Ensure the security and compliance of the CI/CD pipelines and infrastructure.
  • Develop and maintain scripts and tools to automate the CI/CD process.
  • Use Infrastructure as Code (IaC) and containerization to create immutable, reproducible deployments, and establish best practices to scale that IaC project in a maintainable way.
  • Own internal and external SLAs, ensuring they meet and exceed expectations, and continuously monitor system-centric KPIs.
  • Create tools for automating deployment, monitoring, alerting, and operations of the overall platform, and establish best practices for CI/CD environments and methodologies such as GitOps.
  • Analyze our AWS/Azure resource usage to optimize the balance of performance vs. cost.
  • Work on our data lake, data warehouse, and stream processing systems to create a unified query engine, multi-model databases, analytics extracts and reports, as well as dashboards and visualizations.
  • Design and build systems for high availability, high throughput, data consistency, security, and end-user privacy, defining our next generation of data analytics tooling.
  • Mentor other engineers and promote software engineering best practices across the organization, designing systems with monitoring, auditing, reliability, and security at their core.
  • Devise solutions for scaling data systems for various business needs and collaborate in a dynamic, consultative environment.

What we value

These skills will help you succeed in this role.

  • A deep understanding of CI/CD tools and a strong desire to help teams release frequently to production, with a focus on delivering reliable, high-quality results.
  • Experience with globally distributed log and event processing systems built around data mesh and data federation architectures is highly desirable.
  • Expertise in DevOps, DevSecOps, and emerging DataSecOps and Data Governance practices, with deep experience managing and scaling container-based infrastructure-as-code technologies from the CNCF and related ecosystems.
  • Experience designing and building data warehouses, data lakes, or lakehouses using batch, streaming, lambda, and data mesh solutions, and improving the efficiency, scalability, and stability of systems.
  • Knowledge of dbt, Airflow, Ansible, Terraform, Argo, Helm, or other data pipeline systems; ideally, experience building and maintaining a data warehouse and an understanding of simple data science workflows and terminology.
  • Expertise with AWS or Azure and services/tooling such as Terraform, Packer, Docker, Kubernetes, Helm, Prometheus, Grafana, Fluent Bit, and Istio (service mesh).
  • Strong background integrating continuous delivery (CD) with Kubernetes using tools such as Argo, GitLab, or Spinnaker, along with strong Git experience and knowledge of development methodologies (trunk-based development vs. Git flow), Helm, etc.
  • Strong end-to-end ownership and a good sense of urgency to enable proper self-prioritization.
  • Ability to maintain live services by measuring and monitoring availability, latency, and overall system health.
  • Clear understanding of the storage, compute, and data services offered by cloud providers.
  • Clear understanding of cloud network models and network security.
  • Understanding of load balancers, firewalls, DNS, and on-prem/cloud connectivity, including IP addressing, subnets, TCP/IP, load balancing, etc.
  • Understanding of cloud architecture pillars: security, reliability, cost optimization, performance, operations, etc.

Education & Preferred Qualifications

  • Bachelor's degree in a computer- or IT-related subject
  • 5+ years of DevOps experience, including the cloud CLIs (e.g., Azure CLI) and SDKs offered by Azure or AWS
  • 5+ years of cloud IaC experience, with deep expertise in Terraform/CloudFormation and Ansible/Salt deployment
  • 3+ years of Kubernetes (AKS or EKS) experience focused on DevOps
  • Practical experience with data engineering and the accompanying DevOps and DataOps workflows

Additional requirements

  • Certifications in Azure and AWS
  • Experience with Terragrunt; HashiCorp Certified Terraform Associate certification
  • Familiarity with the SDLC and Agile methodologies
  • Ability to multi-task, meet aggressive timelines, and maintain a strong work ethic
  • Experience working in the financial industry

Salary Range:
$100,000 - $160,000 Annual

The range quoted above applies to the role in the primary location specified. If the candidate would ultimately work outside of the primary location above, the applicable range could differ.

Client-provided location(s): Princeton, NJ, USA; Quincy, MA, USA
Job ID: StateStreet-R-737217
Employment Type: Full Time