Senior Data Engineer
- Milwaukee, WI
Baker Tilly Virchow Krause, LLP (Baker Tilly) is a leading advisory, tax and assurance firm whose specialized professionals guide clients through an ever-changing business world, helping them win now and anticipate tomorrow. Headquartered in Chicago, Baker Tilly and its affiliated entities have operations in North America, South America, Europe, Asia and Australia. Baker Tilly is an independent member of Baker Tilly International, a worldwide network of independent accounting and business advisory firms in 145 territories with 34,700 professionals. The combined worldwide revenue of independent member firms is $3.6 billion. Visit bakertilly.com or join the conversation on LinkedIn, Facebook and Twitter.
It's an exciting time to join Baker Tilly!
Baker Tilly's principles of integrity, passion and stewardship define us as an organization and an employer. With offices consistently earning 'best place to work' honors and as a top-ranked firm, Baker Tilly recognizes that our approach, strategy and culture are driven by our people. Their focus and commitment have been fundamental in getting us to where we are today and where we will go in the future. Based on the growth we are planning, Baker Tilly is actively recruiting bright, talented individuals who have a passion to succeed.
Due to the continued growth of our consulting practice, we are currently recruiting for a Senior Data Engineer to join our team. As part of Baker Tilly Consulting, you will find that our global brand and entrepreneurial environment will give you the support you need to apply your industry and technical experience to build your career across a wide range of services that meet our clients' most important needs. As a member of our team, you will also contribute to some of the most important activities in our firm, which include operating and growing the business, serving the client, developing the best people, and shaping our culture.
Responsibilities:
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems
Qualifications:
- Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems or another related field
- Minimum of five (5) years of experience in a Data Engineering role
- Advanced working SQL knowledge and experience working with relational databases
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing 'big data' data pipelines, architectures and data sets
- Deep experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management
- Experience with relational SQL and NoSQL databases, including PostgreSQL and Cassandra
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing and highly scalable 'big data' data stores
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Strong project management and organizational skills required
- Experience supporting and working with cross-functional teams in a dynamic environment
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow preferred