As the dental industry gets more competitive, dental labs will need to turn to new technologies just to survive. Artificial intelligence can give these labs the edge they need in a crowded marketplace. Sergei Azernikov, vice president of software development at Glidewell, recently discussed the impact that AI can have on dental labs with the Dental AI Council (DAIC).
DAIC: How did you first come to recognize the potential of AI?
Sergei: It is a long story, but an innovative, forward-thinking one. Glidewell is one of the largest providers of patient-specific products for restorative dentistry in the United States. Probably in the world. We produce thousands of custom-made dental solutions daily, including restorations, implants, and appliances. These require a high level of precision and consistency.
Traditionally, dental manufacturing has relied heavily on highly skilled technicians. We employ about 5,000 professionals. A typical lab is five to 10 people. A major challenge for us is maintaining a consistent level of quality throughout the enterprise. This includes the various facility locations, as well as technicians of different backgrounds. It’s very difficult.
I would say that the first big step in addressing this challenge was the introduction of CAD/CAM during the last decade. CAD/CAM allowed technicians to focus more on the qualitative aspects of the product. Now we have AI built on top of the CAD/CAM system.
The advantage of AI is that, unlike traditional automation, these systems are not programmed for a single repetitive task. AI systems are flexible and adapt to the context. This makes them well-suited for the dental industry. In dentistry, every tooth is unique; no two teeth are identical. That makes dentistry a perfect test case for machine learning (ML) and AI.
Back in 2015, we began thinking about how to apply AI in our domain. We asked ourselves, “Can we train a machine to recognize individual teeth?” It’s easy for a trained technician, but not for an untrained operator. So, we trained the AlexNet network, which was a standard architecture, to identify teeth.
We set up a competition between one of our most experienced technicians and the neural network. And you can imagine what happened. Even though the technician was very experienced, he did make some mistakes. But the network was accurate 100% of the time!
This gave us confidence that an AI model could learn dental anatomy and, as such, could be applied to many different applications. That's how we convinced the company to invest in AI. And five years later, we have machine learning deployed in many different aspects of the enterprise.
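The tooth-recognition experiment Sergei describes is, at its core, supervised image classification. As a minimal sketch (not Glidewell's actual AlexNet pipeline), here is a toy softmax classifier trained on synthetic two-feature "scans"; the feature vectors, class labels, and all numbers are invented for illustration:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for tooth images: each "scan" is a small feature
# vector, and the label is a tooth class (e.g. 0 = molar, 1 = incisor).
def make_sample(label):
    base = (1.0, 0.0) if label == 0 else (0.0, 1.0)
    return [b + random.gauss(0, 0.2) for b in base], label

data = [make_sample(i % 2) for i in range(200)]

# One-layer softmax classifier trained by gradient descent: a toy
# stand-in for the deep network described in the interview.
w = [[0.0, 0.0], [0.0, 0.0]]  # w[class][feature]
b = [0.0, 0.0]
lr = 0.5

def predict_probs(x):
    scores = [sum(wi * xi for wi, xi in zip(w[c], x)) + b[c] for c in (0, 1)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

for epoch in range(50):
    for x, y in data:
        p = predict_probs(x)
        for c in (0, 1):
            grad = p[c] - (1.0 if c == y else 0.0)  # cross-entropy gradient
            for j in range(2):
                w[c][j] -= lr * grad * x[j]
            b[c] -= lr * grad

accuracy = sum(
    1 for x, y in data
    if max(range(2), key=lambda c: predict_probs(x)[c]) == y
) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

In practice the inputs would be images and the model a deep convolutional network such as AlexNet, but the training loop (forward pass, cross-entropy gradient, parameter update) has the same shape.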
DAIC: Can you provide some examples?
Sergei: Sure. The first step was CAD/CAM, as I mentioned. The next challenge, I would say, was migrating to cloud computing. This is also extremely critical. The digital data we generated was spread across local drives. It wasn’t easy to identify cases, and it was difficult to harvest the data in that form. So, one of our early decisions was to adopt cloud computing.
We were quite early adopters of this technology, about seven years ago, when it was really just taking off. We went to one of the first AWS re:Invent events. It was a small event back then, about 5,000 attendees. I think last year it was around 50,000. At that point, we decided that we would move our infrastructure to AWS. Today, we have more than 10 million cases stored there, which makes it far more convenient to harvest for our machine learning.
That was a necessary precondition for deploying AI later. If you look at our enterprise today, we're really pushing this technology into all aspects of our business. It starts in receiving, where we identify what's coming in the box from the doctor: what type of tray, what type of impression, which tooth the doctor prepared, and what type of restoration is needed. All of that was originally typed in manually; machine learning models now do it automatically and more precisely.
For digitization, we now deploy CT scanners instead of a traditional plaster room. This technology has been used for orthodontic applications for some time, but in restorative dentistry we're a pioneer. We make the CT scanners in house, which is quite unique in the industry. We scan those impressions, and then the AI resolves the data and cleans it up.
These scans feed into our automated design systems, which use a generative-style network, one of our major breakthroughs, to produce the design information. It goes to our digital factory, where we have heavily automated processes that use robotics and milling machines, which we also produce here at Glidewell. And finally, when the restoration is ready, an inspection system verifies that we made what we designed and that we ship the right crown to the right patient and the right doctor.
As you can see, it’s really covering the whole manufacturing process. We’re now taking it to the next level. We incorporate the feedback from the doctor, if there is any, to improve our quality. The result is a closed loop where we try to improve all steps based on customer satisfaction. That’s where we want to be in the near future.
DAIC: When you were talking about cloud storage and the data that you have, obviously we need all the data for the machine learning to function and learn. What does your data look like?
Sergei: In a nutshell, the 3D models and the completed designs. That's the basic data that is stored, but in the process we generate much more information about the product. There are a million parameters, mainly manufacturing and material parameters, and all of this additional data is stored. We don't leverage all of it yet, but in the future I'm sure we'll be able to use pretty much everything to analyze and improve our processes. We shoot for a 360-degree view of the case and our customer to provide the best service possible.
DAIC: You talked about closing the loop and getting feedback from the providers. Is that part of what will enable you to use some of that data that you aren’t using now?
Sergei: Right. Jumping a little bit ahead, there are two modes. There is an, I’ll say, post-factum mode, where if the customer is not happy, we analyze why they’re not happy. And we try to trace the source of the problem and focus on the solution.
But there is also a situation I envision, and we’re quite close to it, where the doctor can be part of the process. Think about the way people are used to being able to see the status of their order on Amazon, for example. They see exactly where their package is. Today we still get calls from the doctors asking, “Where’s my case?” We really want the doctor to see where their case is at any point in the process, as well as provide feedback.
For example, they would be able to see the design file and could approve or disapprove it. They could even fix it to their liking. Each doctor’s requirements differ somewhat, so we should be able to allow them to look at the case and say, for example, “I want this contact to be tighter,” or “I want the occlusion to be higher or lower.”
We envision them being able to do it themselves through a web portal. This is quite achievable today with the level of integration we have, and with the cloud backend that we just discussed. I think that’s ultimately where the future will be, and doctors will expect that.
DAIC: Is there a single aspect of your enterprise that has gained, or stands to gain, the most from these technologies?
Sergei: One of the biggest breakthroughs that we were able to achieve was in design automation. It is difficult to maintain consistent quality across hundreds or thousands of designers. We’ve been working on design automation as long as I’ve been with the company—the last eight years.
Sergei: We only recently achieved a significant breakthrough based on Generative Adversarial Networks (GANs), as I mentioned. MIT Technology Review named the technology one of the biggest breakthroughs of 2018, but its potential isn't fully realized. The idea is that you have a system that generates realistic images that are difficult to distinguish from real ones. Today, most people know it from the notorious deepfakes: fake videos and fake photos.
We were able to use this for the customers’ benefit. We generate aesthetically pleasing dental restorations that are precise in terms of functionality and fit. We’re collaborating with UC Berkeley. I would say the success of this project was quite remarkable. Usually it takes years to get something from academia to production. We were able to bring something from prototype to working production system in less than a year. And we are deploying it today throughout the enterprise.
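The adversarial setup behind this breakthrough can be sketched in miniature. The toy below is an illustration of the generator-versus-discriminator idea only, not Glidewell's system: real "designs" are just numbers drawn near 4.0, the generator is a one-layer affine map, and the discriminator a logistic unit. After training, the generator's outputs drift toward the real data's region.

```python
import math
import random

random.seed(1)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# generator: g(z) = wg * z + bg, with noise z ~ N(0, 1)
wg, bg = 1.0, 0.0
# discriminator: d(x) = sigmoid(wd * x + bd), "is x a real design?"
wd, bd = 0.0, 0.0
lr, batch = 0.05, 32

def real_sample():
    return random.gauss(4.0, 0.5)  # stand-in for real training designs

for step in range(4000):
    # --- discriminator step: push real -> 1, fake -> 0 ---
    gw = gb = 0.0
    for _ in range(batch):
        x = real_sample()
        err = sigmoid(wd * x + bd) - 1.0        # grad of -log d(x) wrt logit
        gw += err * x
        gb += err
        z = random.gauss(0.0, 1.0)
        f = wg * z + bg
        err = sigmoid(wd * f + bd)              # grad of -log(1 - d(f))
        gw += err * f
        gb += err
    wd -= lr * gw / batch
    bd -= lr * gb / batch

    # --- generator step: fool the discriminator (non-saturating loss) ---
    ggw = ggb = 0.0
    for _ in range(batch):
        z = random.gauss(0.0, 1.0)
        f = wg * z + bg
        err = -(1.0 - sigmoid(wd * f + bd)) * wd  # grad of -log d(f) wrt f
        ggw += err * z
        ggb += err
    wg -= lr * ggw / batch
    bg -= lr * ggb / batch

fake_mean = sum(wg * random.gauss(0.0, 1.0) + bg for _ in range(1000)) / 1000
print(f"generator output mean: {fake_mean:.2f}  (real data mean: 4.0)")
```

A production GAN replaces both one-parameter players with deep networks over 3D geometry, but the alternating two-player training loop is the same.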
So, we practically eliminate the need for a human designer. The designer, in turn, becomes a QC person. They’re looking at the final design. If approved, they pass it. In most cases, they like it. If they don’t, it usually takes less than a minute to adjust. As a result, our designers are able to increase their throughput by more than a hundred percent.
We see it as one of the biggest tangible achievements of AI so far for our company. We published the results of this at leading technology events and it made waves in the AI/ML community beyond the dental space. It’s one of the first major tangible applications of generative adversarial network technology so far. In fact, I’m not aware of anybody else who is using it in the physical realm. So that’s quite exciting.
DAIC: You mentioned less reliance on human designers in your production process, so what’s been the impact on the human workforce associated with integrating AI into your enterprise?
Sergei: There will be an impact on the workforce. And it’s already happening, not only from AI, but also from other digital technologies. I mentioned CT scanning before. It’s essentially making the whole plaster room obsolete, and that’s one of the most labor-intensive steps in the process today.
So, we’ll see a huge shift in labor costs and also in type of jobs. Instead of relying on low-skilled labor, we’ll need people to operate scanners. We’ll need labeled data to train neural networks. In fact, we already have people doing that on my team. We’ve been pulling people from production—dental technicians who are interested in making a career change. We train them to label data for us, operate scanners, and test software. Some are even becoming software engineers, which is, I think, quite remarkable.
DAIC: Where does the dental industry overall stand in terms of AI adoption today?
Sergei: The dental industry is in the earliest stages relative to other industries. One good example is the adoption rate of intraoral scanners. It's the first step on the digitalization path; it's how you create digital data. This technology has existed on the market for decades, yet more than 80% of US dentists still use physical impressions.
The adoption rate is actually accelerating today, especially due to the COVID situation, where people may consider sending physical impressions unsanitary. But, still, I think the industry overall has a long way to go compared to other advanced fields. Digital dentistry is still in its infancy.
DAIC: What do you see as the path forward to better AI integration in the industry overall?
Sergei: Well, there are different perspectives, obviously. There are different layers: doctors, their labs, and their patients. More and more pressure can come from the patient side and from competition. Companies like SmileDirectClub, Byte, and others are startups that essentially circumvent the doctor and go directly to the consumer. This puts a lot of pressure on doctors to modernize, become more efficient, and provide better and more affordable care.
For example, with COVID restrictions, some doctors have adopted remote care options. There are companies like Dental Monitoring and Grin that provide the option for you to monitor yourself. They allow patients to provide videos taken on their cell phone to let the doctor remotely review their progress.
Another issue is competition among the doctors themselves. Many want to improve the quality of care and show patients that their diagnostics are objective. They employ tools like intraoral scanning and automated diagnostics. I think this will be the first step for doctors into the AI/ML realm.
From the lab's perspective, there is also pressure as doctors start adopting more and more chairside systems: intraoral scanning plus design software and milling machines, so the doctor can make a restoration chairside. We have a product like that, and we see quite an increased demand for it.
I see in the future, as generations change and as more younger doctors get into the profession, more and more doctors will be willing to adopt these new technologies. And that will put pressure on labs to stay competitive and grow and provide better, faster care. We now offer our customers three-day turnaround to compete with chairside systems. There are all these factors that will put pressure on different players. Ultimately, they will need to adopt more advanced digital workflows to stay competitive.
DAIC: Before we wrap up, can you touch on any weaknesses you see in current digital systems?
Sergei: If you look at the type of machine learning used today in 99.9% of commercial applications, it's what's called fully supervised learning. This means that all the training data is provided up front, and after that, the model doesn't change. If you need to make changes, you have to retrain the model. The model is only as good as the data we feed it during the training phase, and it can only be used for the specific task it was trained for.
So, for example, if you train a model to make single units, it will never be able to generate bridges. If the type of input changes, the model cannot adapt to it. If you train a network to identify a certain type of pathology, it can only detect those pathologies, and if you try to look for something slightly different, it can’t identify it.
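That closed-world behavior is easy to demonstrate. In the hypothetical sketch below, a nearest-centroid classifier trained on two restoration types is handed a case unlike anything it saw; it has no way to answer "unknown" and must force the case into a known class. All labels and coordinates here are invented for illustration.

```python
import math

# Training data: two known restoration types as 2-D feature points.
train = {
    "crown_single_unit": [(1.0, 0.2), (0.9, 0.1), (1.1, 0.3)],
    "inlay": [(0.1, 1.0), (0.2, 0.9), (0.0, 1.1)],
}

# "Train" by averaging each class into a centroid.
centroids = {
    label: tuple(sum(p[i] for p in pts) / len(pts) for i in range(2))
    for label, pts in train.items()
}

def classify(x):
    # Always returns one of the known labels -- no "unknown" option.
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

# A "bridge" case the model never saw: it is still forced into one of
# the two classes it was trained on.
novel = (3.0, 3.0)
print(classify(novel))
```

The same limitation holds for a deep network: its output layer enumerates the classes fixed at training time, so a genuinely new case type requires retraining, not just new input.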
What we really want instead is a system that can learn dynamically as it goes, as a human does. It's still a challenge with current technology, but I'm sure that's where it's headed. We'll have systems with what's called continual, or lifelong, learning: real-time learning where the system adjusts itself based on either human feedback or new data.
Another challenge is long-tail problems, where we have very scarce data about certain situations. Current algorithms require massive amounts of data to train something stable. If we have a pathology that doesn’t occur very often, it’s quite difficult to train a model to identify that.
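A small arithmetic example shows why scarce classes are hard even to evaluate. With a pathology at 1% prevalence, a model that never flags it still scores 99% accuracy while finding zero rare cases (all numbers invented for illustration):

```python
# 1,000 cases: 990 healthy (0), 10 with a rare pathology (1).
labels = [0] * 990 + [1] * 10

# A degenerate "model" that always predicts healthy.
predictions = [0] * 1000

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
rare_found = sum(1 for p, y in zip(predictions, labels) if y == 1 and p == 1)

print(f"accuracy: {accuracy:.1%}, rare cases detected: {rare_found}/10")
# -> accuracy: 99.0%, rare cases detected: 0/10
```

This is why long-tail problems need per-class metrics (recall on the rare class) and techniques beyond plain supervised training on whatever data happens to be abundant.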
These are, in my mind, the major roadblocks, or the major challenges that need to be solved by the AI/ML community in general. And it will make a good many things easier in the future. Sooner or later, it’s going to happen.
Sergei Azernikov is vice president of software development at Glidewell, where he leads development of AI technologies that will shape the dental industry of the future.
The Dental AI Council (DAIC) is a non-profit devoted to helping define the future of artificial intelligence (AI) in dentistry through research, education, and thought leadership. To join the DAIC's effort, visit dentalaicouncil.org/membership.