Staff Software Engineer, Enterprise Integration
1 week ago • Dallas, TX
Hi — We’re AppFolio. We’re innovators, changemakers, and collaborators. We’re more than just a software company — we’re pioneers in cloud and AI, delivering magical experiences that make our customers’ lives easier. We’re transforming how people do business in real estate, and we’re looking for engineers who want to design and build systems that scale with the business.
We’re seeking a Staff Software Engineer, Enterprise Integration to help design and build the core integration and API platforms that connect AppFolio’s products, data, and enterprise systems. This is a hands-on engineering role with significant architectural influence, focused on building robust, scalable, and secure software systems — not just configuring tools.

This role is ideal for a senior engineer who enjoys solving complex distributed systems and integration problems, writing production-grade code, and shaping long-term platform architecture.
As a Staff Software Engineer, you will be a senior technical contributor responsible for designing, building, and operating cloud-native integration services and APIs that connect AppFolio’s products, data platforms, and enterprise applications.
You will work closely with product engineering, data, and enterprise teams to deliver API-first and event-driven architectures, with AWS as the primary execution platform.
Responsibilities
- Design, build, and operate cloud-native microservices on AWS to support enterprise integrations and APIs.
- Write high-quality, production-grade software in Java, Kotlin, and/or Python, applying proven software engineering patterns and best practices.
- Architect and implement event-driven and asynchronous systems using messaging and streaming platforms (e.g., Kafka or AWS-native equivalents).
- Apply distributed systems and integration patterns (e.g., idempotency, retries, backpressure, eventual consistency) to build resilient services.
- Design API-first services that expose well-defined domain capabilities for product, data, and enterprise consumers.
- Build and operate large-scale data exports and ingestion pipelines, supporting batch and near-real-time use cases.
- Integrate with external SaaS platforms (Salesforce, Zuora Billing/RevPro, NetSuite, etc.) using custom-built services, APIs, and events.
- Design systems with strong emphasis on observability, fault tolerance, security, and operational excellence.
- Participate in architecture and design reviews, influencing microservices and event-driven architecture standards.
- Design and manage data persistence layers, selecting appropriate database technologies based on access patterns and scale.
- Mentor engineers and elevate practices in system design, code quality, and reliability engineering.
- Drive DevOps best practices including CI/CD, infrastructure as code, and automated testing.
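To give a flavor of the resilience patterns named above (idempotency, retries with backoff), here is a minimal, illustrative sketch in Python — one of the languages listed for the role. All names (`IdempotentConsumer`, `Event`, the in-memory dedup set) are hypothetical; a production service would use a durable store and a real messaging client.

```python
import time
from dataclasses import dataclass, field


@dataclass
class Event:
    """Illustrative message envelope with a unique delivery ID."""
    event_id: str
    payload: dict = field(default_factory=dict)


class IdempotentConsumer:
    """Processes each event at most once, retrying transient handler
    failures with exponential backoff. Sketch only — not a real API."""

    def __init__(self, handler, max_retries=3, base_delay=0.01):
        self.handler = handler
        self.max_retries = max_retries
        self.base_delay = base_delay
        self._seen = set()  # in production: a durable dedup store

    def consume(self, event: Event) -> bool:
        # Idempotency: skip events we have already processed,
        # since at-least-once delivery can redeliver duplicates.
        if event.event_id in self._seen:
            return False
        for attempt in range(self.max_retries + 1):
            try:
                self.handler(event)
                self._seen.add(event.event_id)
                return True
            except Exception:
                if attempt == self.max_retries:
                    raise  # exhausted retries: surface the failure
                # Exponential backoff before the next attempt.
                time.sleep(self.base_delay * (2 ** attempt))
```

A duplicate delivery of the same `event_id` is a no-op, and a handler that fails transiently is retried rather than dropping the event — the same properties the role asks for at platform scale.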
Must-Have Qualifications
- Bachelor’s degree in Computer Science or a related field (Master’s preferred).
- 8+ years of professional software engineering experience, primarily focused on backend and distributed systems.
- Expert-level proficiency in Java, Kotlin, and/or Python.
- Strong experience designing and operating microservices-based architectures in production.
- Deep hands-on experience with event-driven systems and asynchronous processing.
- Strong experience with queuing and messaging systems (e.g., message queues, pub/sub, streaming).
- Strong hands-on experience designing and building AWS-based systems at scale.
- Experience working with relational and NoSQL database technologies, including schema design, data modeling, and performance optimization.
- Solid understanding of software design patterns, integration patterns, and distributed data consistency models.
- Experience designing and operating high-volume data export and ingestion workflows.
- Proven experience delivering complex systems using Agile and modern SDLC practices.
- Strong communication and cross-functional collaboration skills.
Preferred Qualifications
- Experience with Kafka or equivalent streaming platforms.
- Experience with AWS-native messaging and data services (e.g., SNS/SQS, EventBridge, streaming, object storage).
- Experience building internal platforms or shared integration frameworks.
- Familiarity with data consistency, reconciliation, and recovery strategies across distributed systems.
- AWS certifications (Solutions Architect, Developer, or equivalent).
Location
Find out more about our locations by visiting our site.
Compensation & Benefits
The compensation that we reasonably expect to pay for this role is: base pay. The actual compensation for this role will be determined by a variety of factors, including but not limited to the candidate’s skills, education, experience, and internal equity.
Please note that compensation is just one aspect of a comprehensive Total Rewards package. The compensation range listed here does not include additional benefits or any discretionary bonuses you may be eligible for based on your role and/or employment type. Regular full-time employees are eligible for benefits - see here.
#LI-KB1
Client-provided location(s): Dallas, TX
Job ID: og7axfwO-CnsfVfwJ
Employment Type: OTHER
Posted: 2026-01-29T23:33:33
Perks and Benefits
- Health and Wellness
- Parental Benefits
- Work Flexibility
- Office Life and Perks
- Vacation and Time Off
- Financial and Retirement
- Professional Development
- Diversity and Inclusion