
Deep learning-based prototyping of Android GUI from hand-drawn mockups
Author(s) -
Abdelhamid Abdelaziz A.,
Alotaibi Sultan R.,
Mousa Abdelaziz
Publication year - 2020
Publication title -
IET Software
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.305
H-Index - 43
eISSN - 1751-8814
pISSN - 1751-8806
DOI - 10.1049/iet-sen.2019.0378
Subject(s) - graphical user interface , computer science , android (operating system) , software , process (computing) , programming language , human–computer interaction , engineering drawing , operating system , engineering
Recently, transforming graphical user interface (GUI) mockups into code has become a common yet challenging practice for software developers. This transformation is time-consuming, especially when the GUI must keep pace with evolving features. Many studies have acknowledged this challenge and presented solutions based on computer-drawn GUI mockups; however, a research gap remains, as very few have adopted hand-drawn mockups as input. In this study, the authors employ YOLOv5, a fast and accurate deep learning framework, to automate the conversion of hand-drawn GUI mockups into an Android-based GUI prototype. The process starts by detecting all GUI mockups in an input image and determining their bounding boxes, then classifying these mockups into their corresponding GUI objects, and finally aligning these objects to form the output prototype according to the layout in the input image. Experimental results show the effectiveness of the proposed approach in generating a visually appealing Android GUI from hand-drawn mockups, with a recognition accuracy of 98.54% when tested on various hand-drawn GUI structures designed by five developers.
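The layout-assembly step described in the abstract — taking the detector's labelled bounding boxes and arranging them into an Android layout — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `Detection` class, the row-grouping heuristic based on vertical proximity, and the emitted `LinearLayout` skeleton are all assumptions introduced here for clarity.

```python
# Illustrative sketch only: group detected widgets into rows by vertical
# proximity of their top edges, then emit a nested LinearLayout skeleton.
# None of these names or heuristics come from the paper itself.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str  # GUI class predicted by the detector, e.g. "Button"
    x: float    # left edge of the bounding box
    y: float    # top edge of the bounding box
    w: float    # box width
    h: float    # box height


def rows_from_detections(dets: List[Detection], tol: float = 20.0) -> List[List[Detection]]:
    """Group detections whose top edges lie within `tol` pixels into rows."""
    rows: List[List[Detection]] = []
    for d in sorted(dets, key=lambda d: d.y):
        if rows and abs(rows[-1][0].y - d.y) <= tol:
            rows[-1].append(d)   # same row as the previous widget
        else:
            rows.append([d])     # start a new row
    for row in rows:             # order widgets left-to-right inside each row
        row.sort(key=lambda d: d.x)
    return rows


def to_android_xml(rows: List[List[Detection]]) -> str:
    """Emit a vertical LinearLayout containing one horizontal row per group."""
    lines = ['<LinearLayout android:orientation="vertical">']
    for row in rows:
        lines.append('  <LinearLayout android:orientation="horizontal">')
        for d in row:
            lines.append(f'    <{d.label} />')
        lines.append('  </LinearLayout>')
    lines.append('</LinearLayout>')
    return "\n".join(lines)


# Example: one text field on top, two buttons side by side beneath it.
dets = [
    Detection("EditText", 10, 10, 200, 40),
    Detection("Button", 10, 70, 90, 40),
    Detection("Button", 120, 72, 90, 40),
]
print(to_android_xml(rows_from_detections(dets)))
```

Running the example groups the two buttons into one horizontal row (their top edges differ by only 2 px) and places the text field in its own row above them, mirroring the spatial layout of the input sketch.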