Position: Database Architect (NoSQL and Relational)
WHO ARE WE?
IgnitionOne is a digital marketing solutions company providing world-class proprietary technology and expert services to improve digital marketing performance. IgnitionOne’s integrated Digital Marketing Suite (DMS) helps marketers centralize, manage, and optimize digital media, understand cross-channel attribution, and improve conversions on a marketer’s website.
We are looking for an experienced Database Architect to design the physical implementations that solve IgnitionOne’s large-scale data problems. The candidate will provide performant, reliable data systems for the ingestion, processing, and storage of billions of events per day. Our platform tracks real-time events and integrates with hundreds of data partners globally. Building a highly scalable, low-latency, high-throughput data pipeline to support the processing and aggregation of these global data sources is fundamental to the value we deliver to our clients. Our work environment is top-notch, and if you are passionate about building maintainable software at Internet scale, this is the place for you.
RESPONSIBILITIES
- Be the IgnitionOne expert in database technologies, both NoSQL and relational, and their best-use applications, performance, scalability, and other operational considerations
- Collaborate with the Architecture, Engineering, and DevOps teams to determine the appropriate repositories and associated technologies to solve Big Data use cases
- Apply expert experience in translating logical data models into physical designs that leverage a given repository’s strengths, understanding both the data problem and the business use cases and applying that understanding to the repository’s underlying architecture to maximize storage utilization, performance, and return on investment
- Partner with DevOps on all operational considerations for healthy data stores: capacity planning and its step functions, monitoring that goes beyond up/down to true user/application performance, and proper backup, snapshotting, replication, and off-site data distribution based on business need and the criticality of the data
- Partner with the development teams and evangelize data repository best practices to ensure proper, performant, and consistent usage patterns against the Big Data repositories
- Be a key collaborator in the design and implementation of a centralized data lake where needed to target business opportunities
- Lend expertise to evaluations of new and replacement data-related technologies such as at-rest repositories, ETL tools, and reporting suites
- Keep an eye toward continuously improving the performance and reliability of systems
- Ensure proper test coverage of all data pipelines to guarantee data integrity between producers and consumers
- Produce appropriate technical documentation to guide Engineering teams in the best-practice use of technologies
- Participate in peer code reviews
- Inform the future architecture of the Product by envisioning current and future data needs
QUALIFICATIONS
- B.S. in Computer Science or equivalent experience preferred; M.S. or Ph.D. a plus
- Experience in working with diverse distributed teams in a collaborative environment
- Experience as a mentor in database best practices
- Experience partnering with Operations to design the server infrastructure for large-scale data systems
- Deep experience in the use and internals of large, high-performance distributed NoSQL systems such as Cassandra, Aerospike, or Druid
- Deep experience in the use and internals of Big Data platforms such as Hadoop, Redshift, and Snowflake
- Experience working with high-volume, large-scale data flows and ETL
- Experience with other large-scale enterprise relational data environments (SQL Server, Oracle, DB2), particularly in scaling and migration
- Experience with Amazon Web Services offerings such as S3, Kinesis, and RDS
- Experience working in an Agile development environment
- Experience maintaining code assets in Git or similar distributed source control
- Experience in the online advertising industry a big plus
- Experience with AWS and its tool sets in general a plus
- Experience working with messaging systems such as Kafka, RabbitMQ, or Kinesis, as repositories will be fed from streaming data as well as batch and file loads
- Experience working with stream processing frameworks such as Storm, Spark, or Samza
- Comfortable or proficient in *nix development