Using Deep Learning for the Image Recognition of Motifs on the Center of Sukhothai Ceramics
Author(s) - Orawan Chaowalit, Pitikan Kuntitan
Publication year - 2021
Publication title - Current Applied Science and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.14
H-Index - 3
ISSN - 2586-9396
DOI - 10.55003/cast.2022.02.22.002
Subject(s) - convolutional neural network, motif (music), ceramic, artificial intelligence, computer science, visual arts, engineering, art, aesthetics, materials science, composite material
The motifs on the center of Sukhothai ceramics are essential elements for determining the age of the ceramics. Sukhothai ceramics in each kiln were made with different pattern production techniques, so a specific pattern appears only in a particular kiln. Archaeologists can therefore determine which kiln site produced a given ceramic by investigating its motif. However, motif identification requires a well-experienced expert to identify the tracery of the pattern on the center of a ceramic, so identifying such archaeological evidence is difficult even for general archaeologists. The aim of this research was to study the use of deep convolutional neural networks for classifying seven motif patterns on the center of Sukhothai ceramics (i.e., Chrysanthemum bouquet, Classic scroll, Conch shell, Fish pattern, Flower head pattern, Printed Chrysanthemum head, and Tibetan Buddhist vajra). We collected a new dataset comprising 557 images of ceramics from two kiln sites, with each ceramic's motif labeled by Thai ceramic experts. This collection of motifs on the center of Sukhothai ceramics was named the CMC Sukhothai Ceramic Dataset. The efficiency of motif identification on the center of Sukhothai ceramics was evaluated by comparing five pretrained convolutional neural network models: DenseNet121, InceptionV3, VGG16, GoogLeNet, and AlexNet. The models that performed well on our dataset were then selected and trained by fine-tuning. Results showed that VGG16 combined with our classification layers achieved the best performance, reaching 86.54% accuracy on the test dataset after 500 epochs of training.
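
The abstract does not specify the framework or the exact classification layers used; the following is a minimal sketch of the kind of transfer-learning setup it describes, i.e. a pretrained VGG16 backbone with custom classification layers fine-tuned on a seven-class motif dataset. The framework (Keras), layer sizes, and hyperparameters shown here are assumptions for illustration only.

```python
# Sketch of fine-tuning a pretrained VGG16 with custom classification layers
# for 7 motif classes. Framework, head architecture, and hyperparameters are
# assumptions, not the authors' exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 7  # the seven Sukhothai motif patterns

# Load VGG16 pretrained on ImageNet, without its original classifier head.
backbone = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
backbone.trainable = True  # allow the convolutional layers to be fine-tuned

# Custom classification layers (hypothetical sizes) appended to the backbone.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # small LR for fine-tuning
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Training would then use the labeled motif images, for example:
# model.fit(train_ds, validation_data=val_ds, epochs=500)
```

A low learning rate is typical when fine-tuning a pretrained backbone, since large updates can destroy the ImageNet features that transfer learning relies on; the abstract reports training for 500 epochs.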
