Open Access
Deep Learning-Driven Craft Design: Integrating AI into Traditional Handicraft Creation
Author(s) - Yi Chen, Yufeng Lou
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/ACCESS.2025.3614185
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
In the growing field of human-centric artificial intelligence, the integration of computational methods with artistic and cultural practices is emerging as a promising area of exploration. Traditional design workflows often depend on manual expertise and fixed patterns, which restrict flexibility, personalization, and large-scale application. Earlier computational models also struggle to capture both the semantic depth and the visual detail needed for creative outputs that are adaptive and authentic. To address these limitations, we present a new multi-module framework built from a series of deep learning components: a generative adversarial network enhanced with feature injection, an aesthetic-aware filtering mechanism, and a texture refinement unit that adapts to dynamic inputs. The framework is further strengthened by a dual encoding strategy that aligns visual style with contextual meaning across different design themes. Experimental results on both standard benchmarks and custom-built datasets show that our method achieves better visual consistency and thematic accuracy than existing approaches. This progress not only supports automated content generation with cultural relevance but also contributes to the broader goal of developing context-aware, interdisciplinary artificial intelligence systems. Experimental evaluations were conducted on four benchmark datasets: WikiArt, Textile Pattern, Behance Artistic Media, and iMaterialist, covering tasks in cultural pattern recognition, process failure prediction, and load balancing. On the WikiArt dataset, our method achieved an accuracy of 91.74%, an F1 score of 89.21%, and an AUC of 93.07%, outperforming established baselines such as CLIP and BLIP by margins of 3-4% on key metrics.
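To make the two architectural ideas named above concrete, here is an illustrative sketch, not the authors' code: a feature-injection step that blends external style features into a generator's hidden activation, and a dual-encoding alignment score between a visual-style embedding and a contextual-meaning embedding. All names (`inject_features`, `cosine_alignment`) and the additive injection form are hypothetical stand-ins, with NumPy arrays in place of real network layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_features(hidden, style_feat, weight):
    """Feature injection (assumed additive variant): project the external
    style features with a learned weight matrix and add them to the
    generator's hidden activation."""
    return hidden + weight @ style_feat

def cosine_alignment(a, b):
    """Dual-encoding alignment: cosine similarity between a style
    embedding and a semantic embedding, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for encoder/generator outputs (shapes are illustrative).
hidden = rng.standard_normal(64)          # generator hidden state
style_feat = rng.standard_normal(16)      # injected style features
W = rng.standard_normal((64, 16)) * 0.1   # learned injection weights

fused = inject_features(hidden, style_feat, W)   # same shape as `hidden`

style_emb = rng.standard_normal(32)       # hypothetical style encoder output
semantic_emb = rng.standard_normal(32)    # hypothetical semantic encoder output
score = cosine_alignment(style_emb, semantic_emb)
```

In a trained system the weight matrix and both embeddings would come from learned networks; the sketch only shows where the injection and the cross-encoder alignment would sit.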
Similarly, on the Textile Pattern dataset, the model reached 90.32% accuracy with an AUC of 90.61%, demonstrating its effectiveness in handling complex temporal and spatial correlations in handcrafted weaving data. On the Behance Artistic Media dataset, which focuses on failure prediction in creative workflows, our model achieved an F1 score of 87.15% and an AUC of 90.28%, surpassing competing models in robustness under class imbalance. Finally, on the iMaterialist dataset, which concerns production load balancing, the model recorded 91.26% accuracy and 92.17% AUC. These quantitative outcomes substantiate the proposed framework's capacity to deliver superior visual consistency, thematic alignment, and predictive reliability across diverse, culturally enriched datasets. The integration of multi-scale attention mechanisms and culturally annotated training corpora contributes directly to these results, highlighting the framework's potential to advance AI-assisted craft design through quantifiable performance gains.
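The three metrics reported throughout the evaluation (accuracy, F1, AUC) have standard definitions; the following minimal sketch computes them on a toy binary-classification example purely to illustrate those definitions, not the paper's evaluation pipeline. AUC is computed in its rank-based form: the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counting one half.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def auc(y_true, scores):
    """Probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 8 samples, classifier scores thresholded at 0.5.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.55]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

print(accuracy(y_true, y_pred))  # 0.875 (one false positive out of 8)
print(auc(y_true, scores))       # 1.0 (every positive outscores every negative)
```

In practice these would be computed by a metrics library over full test splits; the point here is only what each reported number measures.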
