The 7 unique challenges in AI

AI is not like traditional IT. It brings some unique challenges that you should know about before you make the jump into working with artificial intelligence.

I have listed the challenges here in a “quick-n-dirty” guide format, to save you from reading a novel. I won’t go into how to address the challenges here, only what you need to be aware of. The solutions will probably each get a post of their own at some point.


The challenges

1. When is the AI good enough?

How good should your AI be for it to be good enough? In many cases, it is difficult to know before you have built your AI, as its actual effect can’t be exactly predicted. Individual users will have different experiences and perspectives, so you have to try the AI in real life before you know what it does to its environment.

The trust we have in AI is also not the same as we have in people. For example, would you interpret your doctor's statement “I'm 90% sure” as you would interpret an AI's?

When working with AI, you just don’t know whether the AI is ready or needs more work before it’s applied. And even worse - there is a risk that the AI that works perfectly today will not be sufficient tomorrow. Imagine an AI catching credit card fraud. As soon as you release the AI into the wild, the scammers will change their behavior and you might have to get back to work.
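The fraud example can be pictured as a simple monitoring check: keep scoring the deployed model on fresh, labeled cases and flag it when accuracy falls below an agreed threshold. This is a minimal sketch; the function name and the 95% threshold are illustrative assumptions, not taken from any real system.

```python
# Hypothetical sketch: watch a deployed model's accuracy on fresh labels
# and flag when it drifts below an agreed acceptance threshold.
# All names and thresholds here are illustrative assumptions.

def needs_retraining(y_true, y_pred, min_accuracy=0.95):
    """Return True when live accuracy falls below the agreed threshold."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    return accuracy < min_accuracy

# Yesterday the model looked fine on recent labeled cases...
print(needs_retraining([1, 0, 1, 1], [1, 0, 1, 1]))  # False
# ...but after the scammers adapt, the same model starts missing cases.
print(needs_retraining([1, 0, 1, 1], [0, 0, 1, 0]))  # True
```

The point is not the arithmetic but the loop: "good enough" is a moving target, so the check has to keep running after release.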


2. Mistakes are much worse

When an AI makes a mistake, it is typically perceived as worse than when a human makes one; an AI failure just doesn’t give us the same feeling as a human failure. When an AI fails to classify a case that is completely obvious to humans, it doesn’t matter that the AI generally outperforms humans. The AI will be perceived as stupid, or at least very untrustworthy. This happens very often with AI.

Another good example is when we hear about a Tesla that has crashed somewhere in the world. It’s an isolated case, yet it is enough to make headlines in the world press because it was a self-driving car. At the same time, thousands of drunk drivers are at the wheel all the time, crashing and doing a lot of damage.


3. Communication

When communicating with users, customers, or other stakeholders, you will face some unique challenges. It is not even certain that you share the same language and perception of what it means to work with AI. Aligning expectations about what an AI solution really entails is essential, as AI is quickly perceived as a slightly elusive substance, and as a result expectations may differ widely.

In AI, people often talk about “black box” solutions. An AI that is essentially a model fitted to large amounts of data by adjusting weights and activation functions can be impossible to explain. However, users often have an urge to understand how the AI works. And once you have opened the box, it can be more confusing than beneficial for the user.

4. Estimation is harder

There are many ways to estimate how long it takes to build a piece of software. Most are based on breaking the task down into very small parts that are easy to estimate individually. In AI development, the case is a little different: you cannot break the work down into small parts and estimate them in the same way. Each step in the AI development process can reveal either major progress or a blind spot that sends you back to square one.

Because you do not know in advance what works, the process becomes more experimental. Even in situations where you can look over others’ shoulders and draw inspiration from how similar problems have been solved, small differences in the problem can make a huge difference in the challenges you face.

So in AI, you have to be a little more careful about promising your stakeholders a deadline. Estimation has probably always been a challenge in IT, but in AI the problem is even worse.

5. Uncharted waters

At Paperflow, we often found ourselves facing a technical problem that nobody had solved before. In classic IT, there are thousands of best practices and known solutions to the problems you will encounter. Not that it is always easy, but there is a lot of inspiration to be found in other solutions. In AI, more than ever, I have faced problems that had not been solved before. In these cases you must either decide that you have hit a dead end and make a U-turn, or become the icebreaker that paves the way for others. Both decisions come with a cost.


6. You need more experts

I have experienced a clear difference between the team you have to put together for AI development and the one for traditional IT development. In the good old days, you could settle for backenders and frontenders, but now you also need data scientists or machine learning engineers. On top of that, the requirements for the server setup are higher, and you may need DevOps expertise earlier. An AI development team is therefore often more expensive, as several different experts are needed. This applies less if you work with AutoML solutions, though.


7. No data - no AI

In traditional IT, you see the “product” you develop as the code base you deploy to a server or a computer. It is easy to copy and move, and is quite static in its existence. In AI, you have to see the data as part of the product.

In fact, data is often the most expensive part of developing AI. In many of the AI companies I have talked to, collecting and cleaning data is a greater cost than developing the AI models.

If the world changes - as it always does - then over time your data will become less and less accurate. So you often have to find a way to continuously feed your model new data. As a result, you need a dedicated data operations function if the domain you work in requires fresh data.
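One way to picture such a data operations task is a crude drift check: compare a feature’s distribution in incoming data against the data the model was trained on, and raise a flag when they diverge. This is only a sketch under assumed names and thresholds; real drift detection is usually more involved than a single z-score.

```python
# Hypothetical sketch: flag a feature as "drifted" when the mean of fresh
# data sits far outside the training distribution. The function name and
# the 3-sigma cutoff are illustrative assumptions.
from statistics import mean, stdev

def feature_drifted(train_values, new_values, max_z=3.0):
    """Return True when the new mean is far outside the training distribution."""
    mu, sigma = mean(train_values), stdev(train_values)
    if sigma == 0:
        return mean(new_values) != mu
    z = abs(mean(new_values) - mu) / sigma
    return z > max_z
```

A check like this says nothing about *why* the world changed; it only tells the data operations team that the model’s picture of the world is going stale.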

And then we have not talked about data and GDPR at all...
