What is data operations (DataOps)?
When I write about AI, I very often refer to data operations and how important a foundation it is for most AI solutions. Without proper data operations you can easily get to a point where handling the necessary data becomes too difficult and costly for the AI business case to make sense. So to clarify a little, I wanted to give you some insight into what it really means.
Data operations is the process of obtaining, cleaning, storing and delivering data in a secure and cost-effective manner. It’s a mix of business strategy, DevOps and data science, and it is the underlying supply chain for many big data and AI solutions.
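To give a rough feel for those stages, here is a minimal sketch of such a pipeline. It is purely illustrative: the stage functions and the `Record` type are placeholders I made up, not any particular company's stack.

```python
from dataclasses import dataclass


@dataclass
class Record:
    """A single raw data item, e.g. one uploaded invoice."""
    source: str
    payload: dict


def obtain() -> list[Record]:
    # Pull raw data from wherever it lives (uploads, APIs, partners...).
    return [Record(source="upload", payload={"total": " 100,00 ", "currency": "DKK"})]


def clean(records: list[Record]) -> list[Record]:
    # Normalize formats and fix obvious errors before anything is stored.
    for r in records:
        r.payload["total"] = float(r.payload["total"].strip().replace(",", "."))
    return records


def store(records: list[Record]) -> None:
    # Persist to whatever storage fits the volume and access pattern.
    print(f"storing {len(records)} record(s)")


def deliver(records: list[Record]) -> list[dict]:
    # Hand the data to its consumers: training jobs, dashboards, customers.
    return [r.payload for r in records]


if __name__ == "__main__":
    batch = clean(obtain())
    store(batch)
    print(deliver(batch))
```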
The term data operations was originally coined in the context of big data but has become more broadly used in recent years.
Data operations is the most important competitive advantage
As I have mentioned in a lot of previous posts, I see data operations as a higher priority than algorithm development when it comes to beating the competition. In most AI cases the algorithms used are standard algorithms from standard frameworks that are fed data, trained and tuned a little before being deployed. Since the underlying algorithms are largely the same, the real difference is in the data. The work it takes to get good results from high-quality data is almost nothing compared to the work it takes with mediocre data. Getting data at a lower cost than the competition is also a really important factor, especially in AI cases that require a continuous flow of new data. In those cases, acquiring new data all the time can become an economic burden that weighs down the business.
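To illustrate what "standard algorithms, tuned a little" means in practice, here is a generic scikit-learn sketch. It is not any particular production setup; the toy dataset just stands in for your own, carefully collected data, which is where the real differentiation lies.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Any labeled dataset stands in for your own, carefully collected data.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An off-the-shelf model from a standard framework, tuned a little.
# Nothing here is secret sauce; the result depends mostly on the data.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model = GridSearchCV(
    pipeline,
    param_grid={"logisticregression__C": [0.1, 1.0, 10.0]},
    cv=3,
)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```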
A data operations example: Paperflow
To make it more concrete, I want to use the AI company I co-founded, Paperflow, as an example. Paperflow receives invoices and other financial documents and captures data such as the invoice date, amounts and invoice lines. Since invoices can look very different and their layouts change over time, we need a lot of data and a steady stream of new data. So to make Paperflow a good business we needed good data operations.
To be honest, we weren't that aware of its importance when we made these initial decisions, but luckily we got it right. Our first major data operations decision was to collect all data in-house and build our own system for collecting it. That’s a costly approach: a big upfront investment in developing the system, plus a high recurring cost for the employees whose job is to enter data from invoices into it. The competition had chosen another strategy: they had their customers enter the invoice data into their system whenever the AI failed to make the right prediction on the captured data. That’s a much cheaper strategy that can provide you with a lot of data. The only problem is that customers have just one thing in mind, solving their own problem, regardless of whether what they enter is correct in terms of what you need for training data.
So at Paperflow we found a way to get better data. But how do you get the costs down?
Part of the solution was to invest heavily in the system used for entering data and make it as fast to use as possible. It was real trial and error and it took a lot of work. Without having the actual numbers at hand, I would guess we invested more in the data operations systems than in the AI itself.
Another part of the solution was to make sure we only collected the data we actually needed. This is a common challenge in data operations, since it’s very difficult to know what data you are going to need in the future. Our solution was to collect a lot of data at first (more than we needed, in fact) and then slowly narrow down the amount of data collected. Going the other way around is difficult: if we had suddenly started to collect more data on each invoice, we would basically have needed to start over and discard all previously validated invoices.
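To make the "start broad, then narrow" point concrete, here is a small hypothetical sketch. The field names are made up for illustration, not an actual schema: dropping fields from the labeling schema keeps old labels usable, while adding a required field would force you to revisit every previously validated invoice.

```python
# Hypothetical labeling schemas, showing why narrowing is safe but widening is not.
V1_FIELDS = {"invoice_date", "due_date", "total", "vat", "currency", "order_number", "lines"}
V2_FIELDS = {"invoice_date", "total", "vat", "currency", "lines"}  # narrowed later


def is_valid(label: dict, schema: set) -> bool:
    """A stored label is still usable if it covers every field the schema requires."""
    return schema.issubset(label)


old_label = {field: "..." for field in V1_FIELDS}  # validated under the broad v1 schema

# Narrowing: every old label still satisfies the smaller v2 schema.
assert is_valid(old_label, V2_FIELDS)

# Widening: add a new required field and old labels no longer qualify,
# so all previously validated invoices would need to be re-labeled.
V3_FIELDS = V2_FIELDS | {"payment_terms"}
assert not is_valid(old_label, V3_FIELDS)
```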
We also worked a lot on understanding a very important question: when is an AI guess so likely to be correct that we can trust it and skip validating that part of the data? We got there with a variety of tricks and technologies, one of them being probabilistic programming. Probabilistic programming has the advantage of delivering an uncertainty distribution instead of the single percentage most machine learning algorithms give you. Knowing how sure you are that you are right significantly lowers the risk of making mistakes.
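As a minimal sketch of the idea (not our actual model; a simple Beta posterior sampled with numpy stands in for a real probabilistic program): instead of acting on a single confidence percentage, you look at a whole distribution over how often a prediction of this kind is right, and only skip human validation when that distribution sits firmly above your threshold.

```python
import numpy as np

rng = np.random.default_rng(0)


def should_auto_accept(correct: int, wrong: int,
                       required_accuracy: float = 0.98,
                       min_certainty: float = 0.95,
                       n_samples: int = 10_000) -> bool:
    """Decide whether to skip human validation for a field prediction.

    correct / wrong are historical counts of how often this kind of
    prediction was right. A Beta(correct+1, wrong+1) posterior gives a
    distribution over the true accuracy instead of a single percentage.
    """
    accuracy_samples = rng.beta(correct + 1, wrong + 1, size=n_samples)
    # Probability that the true accuracy clears the required level.
    certainty = (accuracy_samples > required_accuracy).mean()
    return certainty >= min_certainty


# A field type seen 5000 times with 20 mistakes: safe to auto-accept.
print(should_auto_accept(correct=4980, wrong=20))
# Only 50 observations with one mistake: the posterior is much wider,
# so it does not clear the bar and we keep sending it to human validation.
print(should_auto_accept(correct=49, wrong=1))
```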
The strategy of only collecting the data you need the most, by choosing the cases where your AI is most uncertain, is also known as active learning. If you are working on your data operations for AI, you should definitely look into it.
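Here is a hedged sketch of what uncertainty sampling, one common flavor of active learning, looks like in practice. The predicted probabilities and the labeling budget are placeholders; plug in whatever model you already have.

```python
import numpy as np


def pick_for_labeling(probabilities: np.ndarray, budget: int) -> np.ndarray:
    """Return the indices of the `budget` most uncertain predictions.

    probabilities: (n_samples, n_classes) predicted class probabilities
    from an existing model. Uncertainty is measured by entropy; the least
    confident predictions are the ones sent for human labeling.
    """
    entropy = -(probabilities * np.log(probabilities + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-budget:]


# Fake predictions for 6 documents over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],   # most uncertain
    [0.80, 0.15, 0.05],
    [0.55, 0.30, 0.15],
])
print(pick_for_labeling(probs, budget=2))  # indices of the two least certain documents
```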
DevOps challenges in data operations
On the more tech-heavy side, storing data in an effective way also brings challenges. I’m not a DevOps expert, but I have seen in real life the problem of data suddenly growing faster than expected. That can be critical, because your ability to scale quickly comes under pressure. If I could give one piece of advice here, it would be to involve a DevOps engineer early on in the architecture work. Building on a scalable foundation is much more fun than hunting for short-term fixes all the time.