Model Factory for Modeling at Scale


Two Hat Security is a Canadian technology company that uses artificial intelligence to moderate content and filter profanity for its global clients. Their developers and data scientists build custom machine learning models to automate content moderation. These models help filter out the negative and build on the positive: Two Hat’s predictive moderation services enable clients to prevent harmful content while delivering positive user experiences.


Two Hat had an inconsistent, data-scientist-driven pipeline for model development. With more analysts joining the team and growing data capabilities, Laurence Brockman, the Chief Technology Officer, needed a solution that was consistent, scalable, and accessible to the entire development team, not just data scientists.


Two Hat had a problem with its model creation pipeline. Lisa Wood, Research and Data Science Director, wanted a consistent model creation method that was accessible to everyone.

First, every data scientist had a unique way of getting things done, and not all analysts had the skills to use data science tools. She required a solution that would “consistently unify the company’s model creation process across the business.”

The problem, according to Wood, was not just creating models for exploration but creating models that could go into their products (production models). The models had to be customized for each customer, and that process also had to be scalable.

The company was struggling to manage the many custom models going into production. With the goal of maintaining an exceptional product and customer service, Two Hat knew it needed a quick and consistent way to develop production machine learning.

They explored many options.

Tools from Microsoft, Google, and AWS provided only fragmented solutions. Some focused merely on one-off models. Others could process large amounts of data but could not handle a large number of models. Another could only accommodate training each model as a research experiment, an approach Two Hat realized would prove too time-consuming.

Using AWS was too burdensome and fragmented. For example, data could be stored in S3 buckets and models developed in SageMaker, but model deployment required going to yet another place. Plugins exist to patch it all together, but those pieces have to be sorted out and paid for separately. With so many different parts to the solution, merely trying to fit the pieces together was cumbersome and confusing.

The process could be made easier with more people, but no company can just keep adding people to its team. Even if it did, the cost of doing so would outweigh the solution’s benefits. The company, like many others, had to do things fast, and at scale.

All these factors were making the development environment at Two Hat a little chaotic. People used their own tools, worked in creative silos, and relied on cumbersome email chains to communicate with other departments.

The team’s frustration was growing. As one member put it: “I’ll be so happy when I never ever ever ever ever have to have another conversation about what version of the model was used. I have wasted many hours of my life having those conversations.”


Two Hat deployed Braintoy’s mlOS, a production machine learning platform that allowed their team to collaborate.


mlOS is a production machine learning platform that gives teams a uniform, fast, and repeatable way to build, deploy, and manage models at scale. It is an end-to-end solution that puts everything development teams need in one place. Easy to use and highly accessible, mlOS is a game-changing ML platform for developers and teams of all skill levels. With mlOS, team members can collaborate within the platform, reducing the need for communication via email.


mlOS enabled streamlined ML DevOps, something that isn’t possible with research-oriented tools like SageMaker and Azure, which are fine for one-off models but not practical for building machine learning at scale. Other platforms treated models as one-off elements, but Two Hat also needed to build, manage, and monitor models at scale.

All in all, Two Hat’s problems were resolved. The best part: every type of machine learning problem could be solved on mlOS with the team working together as one. Two Hat increased its competitiveness with a more convenient and cost-effective development workflow.

  • mlOS was consistent, scalable, and not specific to data scientists. It could be used by everyone on the team.
  • Even analysts without the skills to use data science tools could build models.
  • mlOS consistently unified the company’s model-creation process across the business.
  • Models could go into their products rapidly.
  • All team members could collaborate seamlessly.
  • Every model created could be customized for each customer and a repeatable pipeline established.

About Braintoy

“Working with Braintoy has been a pleasure. Braintoy provided us with an ideal solution that met our needs perfectly. The company actually took the time out to sit with us and see how we used their product. This alone is one of the most significant principles/habits that a company can have. While sitting with us and going through how the solution was used at Two Hat, Braintoy also provided valuable suggestions and fixes that only upped the solution’s oomph factor. Every tweak made in our favor by Braintoy was done on time. They made the most complex of tasks seem so easy.

The company was very responsive and put in a lot of effort to understand precisely how we wanted to use the product and where we wanted to go with it. Then, they made the appropriate changes to the product so it could go where we needed it to go to meet our company’s requirements. We had assumed that the product was BI oriented, but the more we worked with Braintoy, the more apparent it became that BI was just a part of the package.”