We are looking for a software engineer who will be responsible for designing and implementing data pipelines at Big Data scale.
- Implement parsers and validators for new log sources
- Implement ETL transformers to reformat and enhance the data
- Implement ETL correlators to update data from multiple data sources
- Work on tools and APIs to visualize the backend data
- Troubleshoot performance and data-related problems
- Work with the Analytics team to define the schema for new data sources
Education and Experience Required:
- Bachelor's or Master's degree in Computer Science, or equivalent.
- Typically 4 years of experience.
Knowledge and Skills:
- 2-4 years of Java and/or Python development experience
- Experience working with Hadoop or Big Data technologies (HDFS, Parquet, HBase)
- Experience working with large-scale databases such as Cassandra
- Experience working with MapReduce or Spark, Elasticsearch, and Kafka
- Experience working with relational/SQL databases such as Postgres
Hewlett Packard Enterprise is EEO F/M/Protected Veteran/Individual with Disabilities.
HPE will comply with all applicable laws related to the use of arrest and conviction records, including the San Francisco Fair Chance Ordinance and similar laws and will consider for employment qualified applicants with criminal histories.