The first step to better model analytics.
Complete this 7-minute survey, and T1A will return an assessment to you within 24 hours.

Your assessment includes:

* Model Life-cycle Strengths and Weaknesses vs. Industry Peers

* Opportunities to Reduce Model Time to Market

* Opportunities to Reduce Model Risk

* Recommendations to discuss with a T1A expert, when you're ready

Name
Work Email
Organization
Role
 
Data Science Department
We have a center of excellence for Data Science, and best practices are easily shared.
Total number of data scientists
We have a Model Risk Management team
All data scientists across the organization share data, infrastructure and development tools
1 - strongly disagree, 5 - strongly agree
1
5
 
Lifecycle - Data Preparation
All of our data scientists have data cleansing and preparation as part of their core skill set. Everyone is expected to be responsible for cleansing their own data.
1 - strongly disagree, 5 - strongly agree
1
5
We maintain a layer of pre-aggregated data for machine learning; it is rare that we need to analyze and build new aggregations from data sources.
1 - strongly disagree, 5 - strongly agree
1
5
We maintain Analytic Base Tables (ABTs), which are reused across multiple models.
1 - strongly disagree, 5 - strongly agree
1
5
We maintain a metadata store, and data scientists are aware of all existing features available to them.
1 - strongly disagree, 5 - strongly agree
1
5
We utilize a Data Version Control (DVC) solution.
1 - strongly disagree, 5 - strongly agree
1
5
We use a low-code or no-code solution for feature design and development.
1 - strongly disagree, 5 - strongly agree
1
5
We operate a Feature Store solution.
1 - strongly disagree, 5 - strongly agree
1
5
Unstructured (non-relational) data is frequently used in our models.
1 - strongly disagree, 5 - strongly agree
1
5
 
Lifecycle - Model Development
We use the following compute resources for training:
Our most common machine learning tools are:
Lifecycle - Production Environment
Multiple machine learning frameworks and programming languages are supported.
1 - strongly disagree, 5 - strongly agree
1
5
GPU compute is available for training.
1 - strongly disagree, 5 - strongly agree
1
5
A visual low-code or no-code machine learning tool is available.
1 - strongly disagree, 5 - strongly agree
1
5
An AutoML (Automated Machine Learning) tool is available.
1 - strongly disagree, 5 - strongly agree
1
5
Big Data machine learning frameworks (e.g., Apache Spark) are available.
1 - strongly disagree, 5 - strongly agree
1
5
Project packaging is enabled for reproducibility purposes.
1 - strongly disagree, 5 - strongly agree
1
5
 
Lifecycle - Deployment
We use the following machine learning deployment practices:
Data scientists can autonomously deploy models
1 - strongly disagree, 5 - strongly agree
1
5
The average number of days between building a non-production model and promoting it to production is:
 
Lifecycle - Execution
We are often concerned about how deployed models will perform when volumes scale
1 - strongly disagree, 5 - strongly agree
1
5
We support both batch and real-time inferences
1 - strongly disagree, 5 - strongly agree
1
5
We use a Model-as-a-Service API when integrating with business applications.
1 - strongly disagree, 5 - strongly agree
1
5
 
Lifecycle - Monitoring
We often experience model drift after a model is deployed, and we have missed opportunities or experienced leakage because of it.
1 - strongly disagree, 5 - strongly agree
1
5
We have reports to monitor every model's performance as soon as it is live in production.
1 - strongly disagree, 5 - strongly agree
1
5
We have a test library that is easy to use for any model
1 - strongly disagree, 5 - strongly agree
1
5
We monitor both business and statistical performance for every model.
1 - strongly disagree, 5 - strongly agree
1
5
Whenever model performance is below a threshold, responsible parties are alerted automatically
1 - strongly disagree, 5 - strongly agree
1
5
We monitor data drift and stability
1 - strongly disagree, 5 - strongly agree
1
5
We closely monitor the inference environment for latency, uptime, and resource utilization.
1 - strongly disagree, 5 - strongly agree
1
5
 
Lifecycle - Retraining & Calibration
We use a tool to perform retraining and calibration automatically.
1 - strongly disagree, 5 - strongly agree
1
5
We maintain a current model catalogue, which inventories all models and supporting artifacts
1 - strongly disagree, 5 - strongly agree
1
5
We have model audit and governance processes in place
1 - strongly disagree, 5 - strongly agree
1
5
Our Proof of Concept models are often halted before they go to production due to data governance and security controls
1 - strongly disagree, 5 - strongly agree
1
5
This survey is free. We will analyze your responses and contact you soon.
By submitting this form, you agree to our privacy policy.