Senior Data Scientist
If you’re looking for a career that transforms, inspires, challenges, and rewards you, then come join us! Verisk Analytics is a global supplier of risk assessment services and decision analytics for customers in a variety of markets, including insurance, healthcare, financial services, supply chain, and others. We’re a thriving public company with solid revenue growth and earnings and offices worldwide. And we’re continually looking for ways to augment our existing markets and expand into new markets with excellent growth potential. At Verisk, you’ll be part of an organization that’s committed to serving the long-term interests of our stakeholders, including the communities where we operate.
Our Claims Analytics business is seeking an experienced Senior Data Scientist to serve as a technical resource in the conception and development of new predictive modeling initiatives. *Candidates may work out of our Jersey City, NJ; Lehi, UT; San Francisco, CA; North Reading, MA; or Columbia, SC office.
- Suggest and develop innovative analytic methods that result in a technically superior product and/or create a competitive advantage, while meeting design requirements and the project timeline
- Research, evaluate, and recommend internal and external data sources and coordinate with data resources
- Serve as the senior technical person on data cleansing, variable creation, variable transformation, etc., as well as best practices in the creation of analytic datasets
- Serve as the lead technical person in model development and validation analyses, from driving pragmatic application of methods, to devising novel solutions and diagnostic measures, to coaching and mentoring junior staff
- Act as the senior technical person in the development and execution of methods that address needed business diagnostics; review and aid productization and deployment
- Provide significant input to Product Management on implementation specifications and production testing
- Act as the senior technical person in developing processes and metrics to monitor model performance
- Review reports and make recommendations for needed model refits / enhancements
- Keep abreast of business trends / product needs
- Research literature to stay current on technical methods to solve specific problems
- Graduate degree (M.S. required, Ph.D. preferred) in a quantitative discipline
- 2+ years professional experience building predictive and descriptive models
- Exposure to the property & casualty industry is desirable, and experience with medical, clinical, fraud & abuse, and pharmacy data analytics is a big plus
- Experience and expertise in diverse statistical and data mining techniques (e.g., GLM/regression, boosting, random forests, decision trees, clustering, PCA, SVM, text mining, social network analysis, etc.)
- Demonstrated proficiency with statistical packages such as Spark MLlib, Python, R, or SAS is a must
- Ability to program in Python, Spark, Scala, R, or SAS is highly desirable
- Understanding of RDBMSs and interactive SQL programming skills are a must
- Experience with Big Data technologies like Hadoop, Spark, Hive, NoSQL, etc., and Cloud technologies (AWS, Azure, etc.) is highly desirable
- An aptitude for picking up new technologies is expected
We offer an excellent compensation package. Our benefits package is competitive and includes full healthcare options, a 401(k) plan, and a generous paid-time-off program.
All members of the Verisk Analytics Family of Companies are equal opportunity employers. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other legally protected classification.
Location: CA-San Francisco, MA-North Reading, NJ-Jersey City, SC-Columbia, UT-Lehi
Activation Date: Monday, December 12, 2016
Expiration Date: Saturday, April 1, 2017