PySpark Data Engineer
- Seattle, WA
This position is a remote opportunity.
In a nutshell...
Our Advanced Analytics team is looking for a PySpark Data Engineer to design and develop a scalable data processing infrastructure for our client in the utilities industry. You’ll work closely with our team of analysts, TPMs, and data scientists to enable data-driven decision making and build solutions that have a real-world impact on public safety, customer experience, and environmental protection.
At the same time, you’ll be joining a five-time Best Company to Work For, where super-smart, talented people come together to do outstanding work—and have a heck of a lot of fun while they’re at it. Because we’re a consulting shop with a diverse clientele, you can count on a steady stream of opportunities to work with cutting-edge technologies and different types of data on projects that make a real difference.
The Logic20/20 Advanced Analytics team is where rock stars in data engineering, data science, and visual analytics join forces to build simple solutions for complex data problems. We make it look like magic, but for us, it’s all in a day’s work. As part of our team, you’ll collaborate on projects that help clients spin their data into a high-performance asset, all while enjoying the company of kindred spirits who are as committed to your success as you are. And when you’re ready to level up in your career, you’ll have access to the training, the project opportunities, and the mentorship to get you where you want to go.
“We build an environment where we really operate as one team, building up each other’s careers and capabilities.” – Adam Cornille, Director, Advanced Analytics
You’re the perfect person for the job if you’re a big-data engineering ninja with …
- A nose for uncovering business needs and pain points in partnership with executive management
- A talent for communicating engineering concepts to non-techy business stakeholders
- A passion for building large-scale machine learning pipelines
- A knack for developing and iterating solutions at record speed
What you’ll be doing
- Joining forces with internal and external teams to understand the client’s business needs
- Designing and developing a scalable data processing infrastructure
- Helping the client better understand their core needs, with a keen awareness of technical limitations
What we're looking for
- Strong understanding of high-performance ETL development with Python
- 5+ years of data engineering experience, including 3+ years with PySpark
- Data engineering implementation experience with technologies such as Spark and PySpark
- Advanced engineering skills with Python
- Comfortable working with very large datasets
- Demonstrated ability to identify business and technical impacts of user requirements and incorporate them into the project schedule
- Strong communication and interpersonal skills
- Ability to work both independently and as part of a team
- Ability to work across teams to solve technical roadblocks for our customers
- An undergraduate degree in technology or business is required
We'd be super impressed if you had...
- Experience building data and computational systems that support machine learning
- Knowledge of AWS services
- Experience with modern software delivery practices, including source control, testing, and continuous delivery
- Experience delivering product with Agile methodologies
- Experience with streaming data in Spark
More reasons to work with Logic20/20
- Recognition by Seattle Business as a Best Company to Work For—five years in a row
- Benefits including medical, dental, vision, life insurance, EAP, 401(k), and more
- Training, certification, career management, and mentorship programs to keep your brilliant career moving forward
- Flex time and remote-work options, depending on the project
- Paid time off to enjoy your life outside work (yes, we know you have one)
- Company-sponsored volunteer events where you can give back while having fun