In this position you will be part of a team of Software Engineers engaged in a large-scale effort to design, develop, implement and support Accounts Receivable and Finance applications. You will wear two hats in this role. First, you will serve as our lead Data Architect for "everything data" within our domain. Second, you will work as a Data Engineer on our squad, shoulder to shoulder with other data and software engineers to deliver solutions. I am NOT looking for an armchair architect who draws charts. Instead, the ideal candidate will lead in the area of data but will also be expected to roll up their sleeves and work hand in glove with peers and clients. You must be adept at problem solving, workflow analysis, and interpersonal relations, and have strong oral and written communication skills.
This role focuses on the elements required to manage all aspects of data and information (both structured and unstructured), from business requirements to logical and physical design. It spans the full information management lifecycle, from acquisition, cleansing, transformation, classification and storage to presentation, distribution, security, privacy and archiving. Key responsibilities of the Data Lead / Architect include databases (SQL, NoSQL, XML, JSON, etc.), file systems and storage management, as well as document imaging, knowledge and content management, taxonomies and business intelligence. The Data Lead / Architect must be capable of designing centralized or distributed systems that both address the user's requirements and perform efficiently and effectively.
- Act as subject matter expert and keep the team up to date on standards, practices, and technology trends in the area of data.
- Work with stakeholders, product owners and team members to define requirements and develop solutions, bridging the desires of the customer with the capabilities of the technology.
- Solid understanding of relational database design and concepts
- Write and develop complex, critical SQL code
- Experience with database performance tuning
- Develop solutions for data extraction, preparation, and loading of data from a variety of relational and non-relational sources into the Big Data environment.
- Serve as DBA on an end-to-end DevOps squad supporting a highly complex Information Warehouse.
- Design scalable and maintainable solutions in traditional structured data platforms as well as NoSQL (Cloudant), Hadoop, Apache Spark, Apache Kafka
- Adopt and enforce best practices for data ingestion and extraction across all platforms.
- Build data and analytics tools that will offer deeper insight into the pipeline, allowing for critical discoveries surrounding key performance indicators
- Develop business solutions in an Agile environment with focus on Continuous Integration and Continuous Delivery
- Develop a modern dimensional representation of our transactional data
- Utilize multiple approaches to facilitate discovery: modeling, Lean, Agile, Design Thinking, etc.
- Demonstrate a strong work ethic and eagerness to learn
- Perform production support activities such as troubleshooting, testing, and addressing defects as needed
- Ability and willingness to learn and apply new and emerging technologies in the data domain.
Must have the ability to work in the US without current/future need for IBM sponsorship
Above all, we are looking for applicants who will thrive in an open, energetic, flexible, fun-spirited, collaborative environment and desire creative freedom and an opportunity to work on high performing teams!
A day in the life at IBM
• Throughout the day, you will collaborate with your teammates and interact with our product owners - all while being based out of our North Carolina Agile Center in Research Triangle Park.
• Take advantage of our on-site exercise room.
• Work in an open environment where creativity is welcome and encouraged.
• Stay current with emerging trends in areas related to Big Data, Cognitive, Analytics, Cloud, Agile, Continuous Integration, Continuous Delivery and more.
Required Technical and Professional Expertise
- Bachelor's degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics or a related field
- 5+ years of DB2, SQL experience with large databases, coding, indexing, tuning, optimizing and best practices
- 5+ years of experience in architecting production-grade application data marts / data warehouses.
- Strong knowledge of Linux/AIX shell scripting
- Knowledge of XML/JSON
- 1-2 years of experience with NoSQL offerings such as CouchDB & MongoDB
- Experience with Big Data Concepts
- 3+ years of experience in one or more ETL Tools with an understanding of best practices for building and designing ETL jobs.
- Experience implementing best practices for building an Enterprise Data Strategy
- Strong background in Database Modeling and Modeling Tools
- Experience with GitHub & ZenHub
- Experience working on an Agile development team
- Excellent written and verbal communication skills for technical writing and client presentations.
Preferred Technical and Professional Expertise
- Experience using Hadoop, Spark and Elasticsearch
- Experience with DataStage ETL
IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.