RESEARCH
Year : 2023  |  Volume : 23  |  Issue : 1  |  Page : 84-89

Using deep learning approaches for coloring silicone maxillofacial prostheses: A comparison of two approaches


1 Department of Prosthodontics, Faculty of Dentistry, Gazi University, Ankara, Turkey
2 Department of Computer Engineering, Faculty of Engineering, Atilim University, Ankara, Turkey
3 Department of Computer Engineering, Faculty of Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey

Date of Submission: 25-Mar-2022
Date of Decision: 14-Jun-2022
Date of Acceptance: 30-Jun-2022
Date of Web Publication: 29-Dec-2022

Correspondence Address:
Meral Kurt
Department of Prosthodontics, Faculty of Dentistry, Gazi University, Ankara 06510
Turkey

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jips.jips_149_22

  Abstract 


Aim: This study aimed to compare the performance of two deep learning algorithms, an attention-based gated recurrent unit (GRU) model and an artificial neural network (ANN) algorithm, for coloring silicone maxillofacial prostheses.
Settings and Design: This was an in vitro study.
Materials and Methods: A total of 21 silicone samples in different colors were produced with four pigments (white, yellow, red, and blue). The color of the samples was measured with a spectrophotometer, then the L*, a*, and b* values were recorded. The relationship between the L*, a*, and b* values of each sample and the amount of each pigment in the compound of the same sample was used as the training dataset, entered into each algorithm, and the prediction models were obtained. While generating the prediction model for each sample, the data of the corresponding sample assigned as the target color were excluded. L*, a*, and b* values of each target sample were entered into the obtained models separately, and recipes indicating the ratios for mixing the four pigments were predicted. The mean absolute error (MAE) and root mean square error (RMSE) values between the original recipe used in the production of each silicone and the recipe created by both prediction models for the same silicone were calculated.
Statistical Analysis Used: Data were analyzed with the Student's t-test (α=0.05).
Results: The mean RMSE and MAE values for the ANN algorithm (0.029 ± 0.0152 and 0.045 ± 0.0235, respectively) were significantly higher than those for the attention-based GRU model (0.001 ± 0.0005 and 0.002 ± 0.0008, respectively) (P < 0.001).
Conclusions: The attention-based GRU model outperformed the ANN algorithm with respect to both MAE and RMSE values.

Keywords: Artificial neural networks, attention-based gated recurrent unit, deep learning, maxillofacial silicone


How to cite this article:
Kurt M, Kurt Z, Işık &. Using deep learning approaches for coloring silicone maxillofacial prostheses: A comparison of two approaches. J Indian Prosthodont Soc 2023;23:84-9





Introduction


Maxillofacial defects occur due to cancer, trauma, or congenital deformities. Treatment can be provided with maxillofacial prostheses, which protect the defect area from external influences and satisfy the patient esthetically.[1],[2] The most important factor in the esthetic result of a maxillofacial prosthesis is accepted to be the color match between the prosthesis and the patient's skin.[3]

The most commonly used method for coloring the prosthesis is the “trial and error” method. In this traditional approach, selected pigments are added to the nonpolymerized silicone in small amounts and mixed, a piece of the mixture is held near the patient's skin to evaluate the color match, and pigment addition continues until a match is achieved.[4],[5] However, this method is entirely subjective: it depends heavily on the maxillofacial prosthodontist's experience and color perception, and the illumination of the environment where the coloring is performed can also be misleading. A failure in the coloring procedure may require repeating the entire production process from the beginning.[5],[6],[7]

Today, instead of this time-consuming subjective method, research continues on “digital methods,” in which pigmentation recipes for coloring the silicone are created from data obtained from the patient's skin with color-measuring devices.[7] It has been reported that these objective methods eliminate individual failures, are not affected by metamerism, and give reproducible results.[8] Because of the complex characteristics of human skin, a computer-aided color measurement system becomes a necessity for such operations. For this reason, the “e-skin system” was introduced to maxillofacial prosthodontics.[6],[9] The e-skin system has been reported to provide clinically acceptable color-matched silicone prostheses.[6],[10] However, these systems are relatively expensive and not widely available, since they rely on special color-measuring devices.[9],[11] Furthermore, they operate only with pigments supplied by the manufacturer of the same system, which makes them inaccessible and costly for ordinary specialists as well as patients.[12]

Recently, deep learning has become commonplace owing to its superiority in prediction and classification tasks. Inspired by the human brain, the artificial neural network (ANN) has progressed constantly in recent years. The main idea is to capture the nonlinear dependencies in the input data through a combination of linear operations and nonlinear differentiable activation functions.[13] Previous studies[14],[15] have demonstrated this capability; however, traditional ANN structures are limited in performance when it comes to capturing correlations between time-series observations.[14],[15] Given this limitation of the conventional ANN family, different methodologies have been proposed. One deep learning model for time-series sequences, the recurrent neural network (RNN), was proposed to capture such relationships.[16],[17] However, RNNs have difficulty remembering inputs over long periods due to the vanishing gradient problem.[16] Therefore, the long short-term memory (LSTM)[18] and gated recurrent unit (GRU)[19] architectures were designed to retain long-term dependencies. The GRU converges faster and is computationally more efficient, while its performance is on par with the LSTM. More recently, attention-based models have proven particularly useful for interpreting and capturing nonlinear relations between sequences.[20]

Deep learning, as part of artificial intelligence (AI) technology, is on a constant growth path owing to its ability to handle in-depth analyses and problem cases in various fields, among them medicine. In dentistry, AI assists specialists in examining dental images.[11],[21] Deep learning allows not only classification but also deciding on the course of treatment and predicting disorders.[11],[22],[23] However, when it comes to color matching for maxillofacial prosthetics, there is little research on the use of deep learning tools.[11] This is a gap in the literature, since compared to the skin color reproduction equipment currently in use, such applications could be far more economical, more accessible, and more convenient for creating the right colors for facial prosthetics.[9],[11],[12]

Against this backdrop, this study aimed to evaluate the performance of two different deep learning approaches for coloring silicone maxillofacial prostheses. The null hypothesis was that there is no difference in performance between the two algorithms, the attention-based GRU and the ANN.


Materials and Methods


Preparation of silicone samples

A total of 21 samples with different colors were produced from a room-temperature-vulcanizing silicone elastomer. The colors were obtained using combinations of four pigments (intrinsic master colors: brilliant white [P105], blue [P116], yellow [P106], and brilliant red [P112]; Technovent Ltd., Newport, U.K.) at different concentrations. The base (M522; Principality Medical Ltd., Newport, U.K.) and the catalyst components of the silicone (Original Cosmesil Tin Catalyst and Original Cosmesil Tin Crosslinker M) were mixed at a ratio of 1 g to 2 drops, as recommended by the manufacturer. The pigments were added by weight, measured on a balance with a weight tolerance of 00.000 g (FZ120i, A&D Company, Ltd., Tokyo, Japan). The mixture was blended thoroughly with a spatula until the color was homogeneously distributed. The compounding ranges of the pigments are shown in [Table 1]. The colored mixture was placed in square stone molds of 25 mm × 25 mm × 6 mm. The molds were closed and left to polymerize for 24 h at room temperature. After polymerization, the silicone samples were separated from the molds, and irregularities at the edges were trimmed with scissors. To remove any remnants of the stone molds, the samples were cleaned in distilled water in an ultrasonic cleaner for 10 min.
Table 1: The compounding ranges of the pigments



The color of the silicone samples was measured with a reflectance spectrophotometer (Konica Minolta CM-2300d; Konica Minolta, Tokyo, Japan) on a white background (L: 97.17, a: −0.11, b: 0.16) under standard measurement conditions. The device was set to standard illuminant D65, d/8° illumination geometry, the 10° standard colorimetric observer, an 8-mm-diameter measurement area, and the average of three consecutive measurements. The L*, a*, and b* values of each sample were recorded. The relationship between the L*, a*, and b* values of each silicone sample and the amount of each pigment in the compound of the same sample was used as the training dataset and entered into each algorithm to obtain the prediction model.
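The dataset pairing described above (L*a*b* measurements as inputs, the four pigment amounts as targets) can be sketched as follows. The numeric values here are randomly generated placeholders, not the study's measurements, and the `leave_one_out` helper is a hypothetical name:

```python
import numpy as np

# Hypothetical illustration of the training-data layout: each row pairs a
# sample's measured L*, a*, b* values (inputs) with the fractions of the
# four pigments in its compound (targets).
rng = np.random.default_rng(0)

n_samples = 21
lab_values = rng.uniform([40, -5, 0], [80, 15, 25], size=(n_samples, 3))  # L*, a*, b*
pigment_ratios = rng.dirichlet(np.ones(4), size=n_samples)                # W, R, Y, B fractions

def leave_one_out(i):
    """Sample i is the 'target color'; the remaining 20 samples train the model."""
    mask = np.arange(n_samples) != i
    return lab_values[mask], pigment_ratios[mask], lab_values[i], pigment_ratios[i]

X_train, y_train, x_target, y_true = leave_one_out(0)
print(X_train.shape, y_train.shape)  # (20, 3) (20, 4)
```

In the study this split is repeated once per sample, yielding 21 models.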

The attention-based gated recurrent unit model

The approach adopted in this study is based on an attention model for time-series prediction. Since existing GRU methods fail to generalize the relationship between the L*a*b* values and the white, red, blue, and yellow pigment channels, an attention-based GRU model was employed to improve the results.

[Figure 1] provides an overview of the proposed model, in which bidirectional layers were used along with GRU layers. The attention layer was applied before generating predictions with the sigmoid activation function. The proposed model consisted of about 12 million parameters and was trained for 300 epochs with a batch size of 32 and early stopping criteria. The same parameters were used for the ANN model. The optimization function was root mean square propagation,[24] with a learning rate of 1e-4. If the performance remained unchanged across internal observations, the learning rate was gradually reduced by a factor of 0.7 down to a minimum learning rate of 1e-5 [Table 2].
Figure 1: The general framework of the present study. (1) Prepare the dataset by including the L* a* b* values of the silicone samples and the amounts of the four pigments in the compound. (2) Input the training dataset into the attention-based GRU algorithm to generate the prediction model. (3) Input the L* a* b* values of each sample that is assigned as the target color into the obtained prediction model. (4 and 5) Estimate recipe indicating the amount of pigments for target color based on the model (6) Measure MAE and RMSE error values for each sample. GRU: Gated recurrent unit, MAE: Mean absolute error, RMSE: Root mean square error

Table 2: The layers of the proposed attention-based gated recurrent unit model

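The paper does not publish its model code, but the gating mechanism inside a GRU layer can be illustrated with a minimal NumPy forward pass. This is one common formulation of the update and reset gates, not the authors' implementation; the weight values and layer sizes are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h_prev @ Uz + bz)           # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur + br)           # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh + bh)
    return (1.0 - z) * h_prev + z * h_tilde          # interpolate old/new state

rng = np.random.default_rng(1)
d_in, d_hid = 3, 8   # e.g. L*a*b* input, 8 hidden units (illustrative sizes)
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_hid), (d_hid, d_hid), (d_hid,)] * 3]
h = np.zeros(d_hid)
for x in rng.normal(size=(5, d_in)):   # a short input sequence
    h = gru_cell(x, h, params)
print(h.shape)  # (8,)
```

The gates let the state selectively forget or keep information, which is what makes the GRU resistant to the vanishing gradient problem mentioned above.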


Data preprocessing stage

The leave-one-out cross-validation technique was applied to analyze the performance of the method. Cross-validation is one of the more practical resampling methods for obtaining an unbiased estimate of the accuracy of a learned model. In this respect, k-fold cross-validation is a common method for better estimating the performance of a model trained on a specific dataset. In the machine learning literature,[25],[26] the value of k is usually chosen as 5 or 10. However, if k is set to the number of samples in the dataset, each sample in turn is used as the test sample while the remaining samples are used for training. This resampling methodology is called leave-one-out cross-validation.[25],[26] In the present study, 21 different models were created, one for each sample, and the test results were analyzed across these 21 models. However, since the remaining 20 samples constitute a very limited training dataset, the jittering data augmentation method was used to obtain more reliable results. Jittering is a practical way to improve model performance when the data size is limited. This augmentation method, which is commonly used for time series, was applied to the training data at each leave-one-out stage. Jittering adds Gaussian noise with specified mean and standard deviation values.[27],[28] The sigma value was set to 1e-4, and the mean values varied between 1e-4 and 1e-3. The training data size became 12,000 + 20, as the original 20 values were also added to the augmented data.[29] [Figure 2] shows the jittering-based time-series data augmentation; the blue portions indicate the augmented data.
Figure 2: The augmented data after jittering

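A minimal sketch of the jittering step described above, assuming hypothetical training rows and a fixed noise mean; the 600-copies figure is chosen only to reproduce the stated 12,000 + 20 dataset size:

```python
import numpy as np

rng = np.random.default_rng(2)

# 20 hypothetical training rows left after holding out one target sample:
# columns are L*, a*, b* followed by the four pigment fractions.
train = rng.uniform(0.0, 1.0, size=(20, 7))

def jitter(data, n_copies, sigma=1e-4, mu=1e-4):
    """Augment by adding Gaussian noise (mean mu, std sigma) to repeated rows."""
    reps = np.repeat(data, n_copies, axis=0)
    return reps + rng.normal(loc=mu, scale=sigma, size=reps.shape)

augmented = jitter(train, n_copies=600)        # 20 * 600 = 12,000 rows
train_full = np.vstack([augmented, train])     # 12,000 + 20, originals kept
print(train_full.shape)  # (12020, 7)
```

Because sigma is tiny, each augmented row stays very close to its source row while still breaking exact duplication.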


The L*, a*, and b* values of each silicone sample assigned as the target color were used as input data for the prediction models obtained for that sample from both the ANN and the attention-based GRU algorithms. The recipe giving the ratios for mixing the white (W), red (R), yellow (Y), and blue (B) pigments was generated as the output data. The mean absolute error (MAE) and root mean square error (RMSE) values between the original recipe used in the production of each silicone and the recipe provided by both prediction models for the same silicone were calculated with the following equations:[30]

$$\mathrm{MAE}=\frac{|\Delta W|+|\Delta R|+|\Delta Y|+|\Delta B|}{4}$$

and

$$\mathrm{RMSE}=\sqrt{\frac{(\Delta W)^{2}+(\Delta R)^{2}+(\Delta Y)^{2}+(\Delta B)^{2}}{4}}$$

ΔW, ΔR, ΔY, and ΔB indicate the differences between the real amounts (W, R, Y, and B) and the estimated amounts (W̃, R̃, Ỹ, and B̃) of the white, red, yellow, and blue pigments, respectively. The obtained values were analyzed to evaluate the ability to predict the pigment recipe; lower MAE and RMSE values indicate a better fit.
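The MAE and RMSE computation over the four pigment channels can be expressed directly; the recipe values below are illustrative, not from the study:

```python
import numpy as np

def recipe_errors(true_recipe, predicted_recipe):
    """MAE and RMSE over the four pigment channels (W, R, Y, B)."""
    delta = np.asarray(true_recipe) - np.asarray(predicted_recipe)
    mae = np.mean(np.abs(delta))
    rmse = np.sqrt(np.mean(delta ** 2))
    return mae, rmse

# Hypothetical original vs. predicted pigment fractions for one sample:
true_recipe = [0.70, 0.10, 0.15, 0.05]
pred_recipe = [0.69, 0.11, 0.15, 0.05]
mae, rmse = recipe_errors(true_recipe, pred_recipe)
print(round(float(mae), 4), round(float(rmse), 4))  # 0.005 0.0071
```

Note that RMSE weights larger per-channel errors more heavily than MAE, which is why both are reported.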

Statistical analyses

Data analysis was conducted with a statistical software program (IBM SPSS Statistics version 25.0; IBM Corp., Armonk, NY, USA). The normality of the data distribution was evaluated with the Kolmogorov–Smirnov test, and the homogeneity of variances was investigated with the Levene test. As the data were normally distributed, the Student's t-test was performed to compare the attention-based GRU model and the ANN algorithm. P < 0.05 was considered statistically significant.
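For reference, the two-sample Student's t statistic used in this comparison can be computed by hand in its pooled-variance form; the per-sample error values below are invented for illustration, not the study's data:

```python
from statistics import mean, variance
from math import sqrt

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance (equal-variance form)."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical per-sample MAE values for the two models:
ann_mae = [0.041, 0.052, 0.038, 0.049, 0.045]
gru_mae = [0.002, 0.003, 0.001, 0.002, 0.002]
print(round(students_t(ann_mae, gru_mae), 2))
```

A large positive t here reflects the ANN errors being far above the GRU errors relative to their pooled spread; SPSS additionally reports the corresponding P value.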


Results


The MAE and RMSE rates achieved by the attention-based GRU model and the ANN algorithm are shown in [Table 3]. The mean MAE value for the ANN algorithm (0.045 ± 0.0235) was significantly higher than that for the attention-based GRU model (0.002 ± 0.0008) (P < 0.001). Similarly, the mean RMSE value for the ANN algorithm (0.029 ± 0.0152) was significantly higher than that for the attention-based GRU model (0.001 ± 0.0005) (P < 0.001). The box plots of the prediction models based on the MAE and RMSE evaluation scores are illustrated in [Figure 3] and [Figure 4], respectively.
Figure 3: The box plot of the ANN and attention-based GRU models based on the MAE evaluation scores. ANN: Artificial neural network, GRU: Gated recurrent unit, MAE: Mean absolute error

Figure 4: The box plot of the ANN and attention-based GRU models based on the RMSE evaluation scores. ANN: Artificial neural network, GRU: Gated recurrent unit, RMSE: Root mean square error

Table 3: The prediction error rates achieved by attention-based gated recurrent unit and artificial neural network algorithms




Discussion


In this study, a novel attention-based GRU deep learning model was proposed to predict the pigment recipe, and the results were compared with an ANN deep learning algorithm. The proposed model estimated pigment amounts very close to the original recipe used to manufacture each silicone. The MAE and RMSE rates achieved by the attention-based GRU model were 0.002 ± 0.0008 and 0.001 ± 0.0005, respectively, clearly and substantially lower than those of the ANN algorithm. Based on these findings, the null hypothesis was rejected, because a significant difference was found between the two algorithms.

The MAE and the RMSE have been widely used as standard statistical indicators for assessing model performances. They are used to determine the success of the model by calculating the distance between the actual values and the predicted values.[30],[31] These two metrics are used in many different fields such as time series analysis, data mining, and machine learning.[32],[33],[34] While both have been used to evaluate model performance for a long time, there is still no consensus on the optimal measurement of the model error rates.[30] Thus, both were calculated in the present study.

In a study by Mine et al.,[11] two machine learning algorithms, the random forest algorithm and ANN-based deep learning, were compared with respect to skin color reproduction by determining pigment amounts. The ANN algorithm was found more successful and promising than the random forest algorithm for maxillofacial prosthesis coloration.[11] For this reason, the ANN method was chosen as the comparator for the proposed model in the present study. However, no similar study was found on the performance of the attention-based GRU model for skin color reproduction by predicting pigment compounding amounts; therefore, a one-to-one comparison between the present results and those of other studies is not possible.

Evaluating the clinical outcomes of computerized systems is central to adopting the right technology in treatment processes. Over the past years, there have been new developments in research on skin color assessment and soft-tissue prostheses. In this direction, complete digital workflows for the direct printing of colored silicone prostheses have been introduced.[5],[35],[36],[37] However, all of these attempts remain subject to further tests of their efficiency and applicability.

At present, a key challenge is to determine the required pigment amounts in a way that is not only economical but also precise and targeted. The options available on the market are either too costly or too technical. One advantage of the attention-based GRU model is that it can be run on a single computer with a standard central processing unit, enhancing the availability of the coloration system. A real-time deep learning-based skin color matching technique would further provide more economical and accessible coloration support for maxillofacial prostheses.[11]

The current study offers some important insights into the efficiency and effectiveness of the attention-based GRU model for predicting pigment amounts from L*, a*, and b* values. However, this study has some limitations, among them that the model could not be applied in real time on actual people; hence, error rates rather than ΔE values were reported. For this reason, as in the study of Mine et al.,[11] silicone coloring should be performed according to the L*, a*, and b* values measured from human skin. In future work, it is recommended to color the silicone based on the proposed attention-based GRU model and calculate the color difference between the produced silicone and the human skin; in this way, the tested approaches could be validated. Furthermore, the study should be conducted on a larger population with larger training datasets.


Conclusions


The attention-based GRU model predicted the pigment amounts more accurately than the ANN algorithm and is a promising deep learning technique for improving maxillofacial prosthesis coloration.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Hickey AJ, Salter M. Prosthodontic and psychological factors in treating patients with congenital and craniofacial defects. J Prosthet Dent 2006;95:392-6.
2. Paravina RD, Majkic G, Del Mar Perez M, Kiat-Amnuay S. Color difference thresholds of maxillofacial skin replications. J Prosthodont 2009;18:618-25.
3. Hu X, Johnston WM. Translucency estimation for thick pigmented maxillofacial elastomer. J Dent 2011;39 Suppl 1:e2-8.
4. Hungerford E, Beatty MW, Marx DB, Simetich B, Wee AG. Coverage error of commercial skin pigments as compared to human facial skin tones. J Dent 2013;41:986-91.
5. Xiao K, Zardawi F, van Noort R, Yates JM. Color reproduction for advanced manufacture of soft tissue prostheses. J Dent 2013;41 Suppl 5:e15-23.
6. Karakoca Nemli S, Bankoğlu Güngör M, Bağkur M, Turhan Bal B, Kasko Arıcı Y. In vitro evaluation of color and translucency reproduction of maxillofacial prostheses using a computerized system. J Adv Prosthodont 2018;10:422-9.
7. Coward TJ, Seelaus R, Li SY. Computerized color formulation for African-Canadian people requiring facial prostheses: A pilot study. J Prosthodont 2008;17:327-35.
8. Seelaus R, Coward TJ, Li S. Coloration of silicone prostheses: Technology versus clinical perception. Is there a difference? Part 2, clinical evaluation of a pilot study. J Prosthodont 2011;20:67-73.
9. Mulcare DC, Coward TJ. Suitability of a mobile phone colorimeter application for use as an objective aid when matching skin color during the fabrication of a maxillofacial prosthesis. J Prosthodont 2019;28:934-43.
10. Kurt M, Karakoca Nemli S, Bankoğlu Güngör M, Turhan Bal B. Visual and instrumental color evaluation of computerized color matching system for color reproduction of maxillofacial prostheses. J Prosthet Dent 2021. In press.
11. Mine Y, Suzuki S, Eguchi T, Murayama T. Applying deep artificial neural network approach to maxillofacial prostheses coloration. J Prosthodont Res 2020;64:296-300.
12. Tessaro YV, Furuie SS, Nakamura DM. Objective color calibration for manufacturing facial prostheses. J Biomed Opt 2021;26:025002. [doi: 10.1117/1.JBO.26.2.025002].
13. Haykin SS. Neural Networks and Learning Machines. 3rd ed. New York: Prentice Hall/Pearson; 2009.
14. Dreiseitl S, Ohno-Machado L. Logistic regression and artificial neural network classification models: A methodology review. J Biomed Inform 2002;35:352-9.
15. Heiat A. Comparison of artificial neural network and regression models for estimating software development effort. Inf Softw Technol 2002;44:911-22.
16. Bengio Y, Simard P, Frasconi P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw 1994;5:157-66.
17. Rumelhart DE, Hinton GE, Williams RJ. Learning Internal Representations by Error Propagation. California Univ Inst for Cognitive Science; 1985 (ICS Report 8506).
18. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997;9:1735-80.
19. Gulcehre C, Cho K, Pascanu R, Bengio Y. Learned-norm pooling for deep feedforward and recurrent neural networks. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Berlin, Heidelberg: Springer; 2014. p. 530-46.
20. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, et al. Attention is all you need. Adv Neural Inf Process Syst 2017;30:5998-6008.
21. Benakatti VB, Nayakar RP, Anandhalli M. Machine learning for identification of dental implant systems based on shape – A descriptive study. J Indian Prosthodont Soc 2021;21:405-11.
22. Xie X, Wang L, Wang A. Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod 2010;80:262-6.
23. Kim DW, Kim H, Nam W, Kim HJ, Cha IH. Machine learning to predict the occurrence of bisphosphonate-related osteonecrosis of the jaw associated with dental extraction: A preliminary report. Bone 2018;116:207-14.
24. Kurbiel T, Khaleghian S. Training of deep neural networks based on distance measures using RMSProp. arXiv 2017;arXiv:1708.01911.
25. Wong TT. Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognit 2015;48:2839-46.
26. Jung Y. Multiple predicting K-fold cross-validation for model selection. J Nonparametric Stat 2018;30:197-215.
27. Iwana BK, Uchida S. An empirical survey of data augmentation for time series classification with neural networks. PLoS One 2021;16:e0254841.
28. Um TT, Pfister FM, Pichler D, Endo S, Lang M, Hirche S, et al. Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks. In: Proceedings of the 19th ACM International Conference on Multimodal Interaction; 2017. p. 216-20.
29. GitHub – uchidalab/time_series_augmentation: An example of time series augmentation methods with Keras. Available from: https://github.com/uchidalab/time_series_augmentation. [Last accessed on 2022 Mar 25].
30. Chai T, Draxler RR. Root mean square error (RMSE) or mean absolute error (MAE)? – Arguments against avoiding RMSE in the literature. Geosci Model Dev 2014;7:1247-50.
31. Willmott CJ, Ackleson SG, Davis RE, Feddema JJ, Klink KM, Legates DR, et al. Statistics for the evaluation and comparison of models. J Geophys Res 1985;90:8995-9005.
32. Tang J, Liu F, Zou Y, Zhang W, Wang Y. An improved fuzzy neural network for traffic speed prediction considering periodic characteristic. IEEE Trans Intell Transp Syst 2017;18:2340-50.
33. Chen TT, Lee SJ. A weighted LS-SVM based learning system for time series forecasting. Inf Sci 2015;299:99-116.
34. Karunasingha DS. Root mean square error or mean absolute error? Use their ratio as well. Inf Sci 2022;585:609-29.
35. Mohammed MI, Cadd B, Peart G, Gibson I. Augmented patient-specific facial prosthesis production using medical imaging modelling and 3D printing technologies for improved patient outcomes. Virtual Phys Prototyp 2018;13:164-76.
36. Unkovskiy A, Wahl E, Huettig F, Keutel C, Spintzyk S. Multimaterial 3D printing of a definitive silicone auricular prosthesis: An improved technique. J Prosthet Dent 2021;125:946-50.
37. Unkovskiy A, Spintzyk S, Brom J, Huettig F, Keutel C. Direct 3D printing of silicone facial prostheses: A preliminary experience in digital workflow. J Prosthet Dent 2018;120:303-8.

