Machine Learning Engineer/Data Scientist
- Bangalore, India
Autodesk is seeking a Machine Learning Engineer/Data Scientist to join our Sales Data Science team. We build innovative data products and machine learning solutions for Autodesk's sales teams. In this critical role, you will work alongside product development teams, product managers, and data engineers to build foundational data science models for our Asia-Pacific (APAC) sales region.
Data, automation, and advanced analytics technologies are drastically transforming our APAC sales organization, and you will be the data scientist in charge of overseeing our data science practice in Singapore, India, China, Korea, Australia, and New Zealand. As the Data Scientist for APAC, your primary responsibility will be to empower our sales teams with machine learning models and data analytics that make them more productive and better equipped to be customer-centric. You will collaborate with our Data Scientists in Barcelona and the US to build major data science products.
The ideal candidate is a strong communicator with experience as a Data Analyst or Data Engineer, has strong Data Science leanings, and has built multiple analytic models and machine learning algorithms.
You will be in charge of establishing and maintaining machine learning deployment pipelines, including their associated life-cycle management systems and practices, in coordination with your architecture peers and communities of practice throughout the company.
Your responsibilities will be:
- Designing and implementing Machine Learning models and algorithms that enable account selection, customer targeting, and process improvements for the sales teams in APAC.
- Developing and maintaining model deployment pipelines for many types of machine learning, including supervised and unsupervised learning as well as CNNs, RNNs, and other deep learning algorithms.
- Working closely with data scientists, domain experts, and sales teams to understand model performance management requirements and to design suitable inferencing instrumentation systems and practices that meet them.
- Designing and implementing outbound data engineering pipelines that serve curated datasets to business intelligence and reporting.
- Designing integration solutions, including applications as needed, to deliver inferencing outcomes or curated datasets for consumption and action.
- Ensuring your model deployment, outbound data engineering, and integration pipelines are architecturally and operationally integrated with the inbound ingestion and contextualization pipelines designed by your peer domain architects.
- Delivering and presenting results to sales leaders regarding their business, forecast, pipeline, and potential customers.
Education & Experience:
- Very strong communicator
- Advanced degree in computer science or data science strongly preferred; an equivalent engineering, data science, or mathematics degree, or a technical undergraduate degree combined with relevant experience, will also be considered
- 5+ years of relevant work experience
- 3+ years of experience working with data scientists in a data engineering or production machine learning inferencing capacity, working with various types of supervised and unsupervised learning algorithms for classification, recommendation, anomaly detection, clustering, and segmentation, as well as CNNs, RNNs, or other deep learning algorithms
- 5+ years of full-stack experience developing large scale distributed systems and multi-tier applications
- 5+ years of programming proficiency in at least one modern JVM language (e.g., Java, Kotlin, Scala) and at least one other high-level programming language such as Python
- 2+ years of production DevOps experience
- 3+ years of programming on the Apache Spark platform, leveraging both the low-level RDD and MLlib APIs and the higher-level APIs (SparkSession, DataFrames, Datasets, GraphFrames, Spark SQL, Spark ML)
- Demonstrated deep understanding of Spark core architecture including physical plans, DAGs, UDFs, job management and resource management
- At least 1 year of implementation experience with Apache Airflow, and a demonstrated expert-level understanding of both segmented and unsegmented Directed Acyclic Graphs and their operationalization
- Experience working with Neo4J and a demonstrated ability to lead architecture efforts for its implementation
- Strong technical collaboration and communication skills
- Passion for sales and customer segmentation
- Proficiency with functional programming methods and their appropriate use in distributed systems
- Proficiency with AWS foundational compute services, including S3 and EC2, ECS and EKS, IAM and CloudWatch
- Proficiency with SageMaker, Kubernetes, and Docker
- Experience with data science toolkits such as R, Pandas, Jupyter, scikit-learn, and TensorFlow
- Experience with SageMaker and data pipelines in AWS
- Familiarity with statistics concepts and analysis, such as hypothesis testing and regression
- Experience building dashboards in platforms such as Power BI, Tableau, Qlik, or Looker
- Salesforce experience is a plus
At Autodesk, we're building a diverse workplace and an inclusive culture to give more people the chance to imagine, design, and make a better world. Autodesk is proud to be an equal opportunity employer and considers all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender, gender identity, national origin, disability, veteran status or any other legally protected characteristic. We also consider for employment all qualified applicants regardless of criminal histories, consistent with applicable law.
Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site). If you have any questions or require support, contact Autodesk Careers.