Making a deep learning project a reality


The effectiveness of deep learning in computer vision is by now well proven. As a result, many manufacturing companies would like to deploy it, or at least run a pilot project, particularly after someone has demonstrated presumed feasibility by evaluating a set of sample images. Moving from theory to practice, however, presents several challenges, because there is often a huge gap between a rough working model and a solution that works in production. Moreover, processes and fabricated parts will evolve, and the deep learning solution will need to cope with these inevitable changes.

A deep learning approach is fundamentally a data-driven solution. This means, first and foremost, that acquisition systems are critical: here the rule is “garbage in, garbage out.” In computer vision, the easier it is for the human eye to distinguish defective from non-defective regions in an image, the easier it will be for a computer. While it is true that deep learning tolerates changes in the image acquisition process better than traditional computer vision solutions, do not expect it to detect things that a trained human would not be able to distinguish. The imaging strategy must therefore be designed and implemented correctly from the start.
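One way to make the “garbage in, garbage out” principle operational is to gate acquisitions with a simple automated quality check before they ever reach a dataset. The sketch below is illustrative only: `acquisition_ok` and the `MIN_SHARPNESS` / `MIN_CONTRAST` thresholds are hypothetical and assume OpenCV is available; real thresholds would have to be tuned on images where a trained inspector can clearly see the defects.

```python
# A minimal sketch of an acquisition-quality gate (hypothetical thresholds).
import cv2

MIN_SHARPNESS = 100.0   # variance of the Laplacian; low values suggest blur
MIN_CONTRAST = 20.0     # standard deviation of pixel intensities

def acquisition_ok(image_path: str) -> bool:
    """Reject frames that are too blurry or too flat to show defects clearly."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False  # unreadable file: never let it into the dataset
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    contrast = gray.std()
    return sharpness >= MIN_SHARPNESS and contrast >= MIN_CONTRAST
```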

A second consideration is that vertical understanding of the data is critical. Deep learning algorithms and the tools to train them are becoming commodity products, while vertical understanding of the data and the domains in which it originates is not. Optimizing the model architecture may gain a few points of accuracy, but the real tipping point is the quality of the data. To see why, suppose that after a few weeks of collecting and annotating data you build a model that seems to work well. You test it in production, and only then do you realize that the quality specifications had been misunderstood and the defect tolerance exaggerated. To fix the problem, all the data must be annotated again, which delays the project by weeks and represents an economic loss.
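A hedged sketch of the mechanics behind that rework: if each annotation records what was actually measured rather than only a pass/fail verdict, a corrected tolerance only requires regenerating labels instead of re-annotating every image. The dataclass, function names, and tolerance values below are illustrative assumptions, not part of any specific tool.

```python
# Hypothetical example: labels derived from measurements and the current spec.
from dataclasses import dataclass

@dataclass
class Annotation:
    image_id: str
    defect_size_mm: float  # what the annotator actually measured

def to_label(ann: Annotation, tolerance_mm: float) -> str:
    """Derive the training label from the measurement and the current spec."""
    return "defective" if ann.defect_size_mm > tolerance_mm else "ok"

annotations = [Annotation("img_001", 0.8), Annotation("img_002", 0.3)]

# First pass with the misunderstood, too permissive tolerance of 1.0 mm:
labels_v1 = {a.image_id: to_label(a, tolerance_mm=1.0) for a in annotations}
# Corrected spec of 0.5 mm: labels are regenerated, not re-drawn by hand.
labels_v2 = {a.image_id: to_label(a, tolerance_mm=0.5) for a in annotations}
```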

In a factory, things tend to change and unforeseen cases always appear. As a result, one cannot expect a model trained on a few images from a specific day, in a specific configuration, to generalize forever to all possible cases. Deep learning models must therefore be maintained over time and must be able to adapt quickly.
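A common way to support this kind of maintenance, sketched below as an assumption rather than a prescribed method, is to route low-confidence predictions to a human reviewer and feed them back into the next retraining round. The model interface, threshold, and queue are hypothetical.

```python
# A minimal, hypothetical monitoring loop for a deployed inspection model.
from typing import Callable, Iterable, Tuple

CONFIDENCE_THRESHOLD = 0.9  # below this, the prediction is not trusted blindly

def triage(frames: Iterable, predict: Callable[[object], Tuple[str, float]]):
    """Split the stream into auto-accepted results and cases needing review."""
    accepted, review_queue = [], []
    for frame in frames:
        label, confidence = predict(frame)
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append((frame, label))
        else:
            review_queue.append(frame)  # annotate and add to the training set
    return accepted, review_queue
```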

In summary, implementing a vision solution based on deep learning is a delicate task, not because of the algorithms involved, but because of the care that must be applied to the selection, use, and traceability of the data, and to the maintenance of the models over time.
