Come change the way the world processes streaming data
The Amazon Web Services (AWS) Kinesis Data Analytics (KDA) team is looking for engineers to work on the Apache Flink framework and to learn and build distributed stream processing engines. We are looking for builders who are enthusiastic about data streaming and excited about contributing to open source.
Processing data from a stream in real time requires substantial investment from customers in writing applications and maintaining the necessary infrastructure. The KDA service provides customers with a fully managed stream processing platform built on the Apache Flink framework, where customers can develop their applications using SQL or Java. With the service, all customers need to do is provide the application code containing the business logic to process the stream; the service takes care of building blocks and abstractions such as processing windows, execution semantics, and checkpoints, as well as infrastructure capabilities such as elasticity and fail-over, eliminating the complexity of stream processing.
As a member of the KDA team you will:
• work on making improvements to the stream processing engine, Apache Flink, to make the KDA service the de facto service for running stream processing applications
• improve the engine and contribute back to open source; upstream compatibility is a core tenet of the KDA service
• improve the efficiency and availability of the engine, add ease-of-use features, and push the envelope of stream processing
• write quality, reusable code for highly scalable and reliable cloud-based services
• be a champion for operational excellence by Insisting on the Highest Standards
• write code that continuously improves service reliability and availability
The ideal candidate has experience working on large-scale systems, enjoys solving complex software problems, and possesses strong analytical, design, and problem-solving skills. While not required, experience with data processing technologies such as Apache Flink, Apache Spark, Apache Storm, or Hadoop frameworks is a plus.
This position involves on-call responsibilities, typically one week every two months. We do not like getting paged in the middle of the night or on the weekend, so we work to ensure that our systems are fault tolerant. When we are paged, we work together to resolve the root cause so that we don't get paged for the same issue twice.
We at AWS value individual expression, respect different opinions, and work together to create a culture where each of us is able to contribute fully. Our unique backgrounds and perspectives strengthen our ability to achieve Amazon's mission of being Earth's most customer-centric company.
Come join us to make stream processing mainstream for our customers.
• Bachelor's degree in Computer Science, Electrical Engineering, or similar; or equivalent experience
• Several years' experience in system software development and delivery
• Advanced software engineering skills, including the ability to write expert-level, maintainable, and robust code in C++, Java, C#, or similar
• Proven Computer Science fundamentals in algorithms and data structures
• Working experience with highly concurrent, multithreaded, and distributed systems
• Experience working on distributed big-data processing engines such as Apache Flink, Apache Spark, etc. is a big plus
• Experience building very high-volume, highly scalable web services
• Experience with distributed systems, consistent hashing, distributed locking, replication, and load balancing
• Strong communication skills and ability to work effectively on shared projects with other developers
• Ability to mentor junior engineers and influence the technical roadmap
Amazon is an equal opportunity employer. We believe passionately that employing a diverse workforce is central to our success. We value your passion to discover, invent, simplify, and build. Our salaries are negotiable.