Published 2026-05-05
Keywords
- generative adversarial networks (GANs)
- realistic animation
- motion quality analysis
- texture and visual composition
Copyright (c) 2026 Authors. Published by the University of Novi Sad, Faculty of Technical Sciences, Department of Graphic Engineering and Design. This is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International License.
Abstract
This study explores the effectiveness of GANs in generating animations that achieve high levels of realism, focusing on motion quality, texture detail, visual composition, and frame-to-frame coherence. A qualitative approach was employed, using ATLAS.ti to analyze outputs from models such as MoCoGAN and StyleGAN3. The dataset comprised 110 animations, from which key visual elements were coded and analyzed thematically. The findings reveal that 68% of the animations demonstrated smooth motion transitions, while 20% exhibited jerky movements and 12% contained motion artifacts. Similarly, 70% of the animations featured highly detailed textures, but 20% had flat backgrounds and 10% showed lighting inconsistencies. Strategic framing and depth perception were observed in 55% and 30% of the animations, respectively, whereas only 15% maintained symmetrical layouts. These results underscore the strengths and limitations of GANs in achieving realism, particularly in complex scenarios. The study contributes to the growing literature on GAN applications in animation by identifying the visual factors most critical to aesthetic and narrative coherence. Practical implications include guiding designers and developers in leveraging GANs for high-quality animation production. Future research should address the remaining technical challenges and evaluate audience responses to GAN-generated animations, paving the way for more dynamic and engaging visual content.
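The percentage figures above amount to a frequency tally over the thematically coded dataset: each animation receives one code per visual dimension, and the share of each code is reported as a rounded percentage. A minimal sketch of that tally (the code labels, function name, and sample counts here are hypothetical, chosen only to mirror the reported motion-quality ratios, not taken from the study's coding sheets):

```python
from collections import Counter

def code_percentages(codes):
    """Tally thematic codes and return each code's share as a rounded percentage."""
    counts = Counter(codes)
    total = len(codes)
    return {code: round(100 * n / total) for code, n in counts.items()}

# Hypothetical coding sheet for a sample of 50 animations (illustrative counts)
motion_codes = (
    ["smooth transition"] * 34
    + ["jerky movement"] * 10
    + ["motion artifact"] * 6
)
print(code_percentages(motion_codes))
# → {'smooth transition': 68, 'jerky movement': 20, 'motion artifact': 12}
```

The same tally would be run separately for the texture and composition code families, each over all 110 animations.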
Article history: Received (December 11, 2024); Revised (February 9, 2025); Accepted (October 15, 2025)
References
- Alaluf, Y., Patashnik, O., Wu, Z., Zamir, A., Shechtman, E., Lischinski, D. & Cohen-Or, D. (2023) Third Time’s the Charm? Image and Video Editing with StyleGAN3. In: Karlinsky, L., Michaeli, T. and Nishino, K. (eds.) Computer Vision – ECCV 2022 Workshops, 23–27 October 2022, Tel Aviv, Israel. Cham, Springer. pp. 204–220. Available from: doi: 10.1007/978-3-031-25063-7_13
- Castleberry, A. & Nolen, A. (2018) Thematic analysis of qualitative research data: Is it as easy as it sounds? Currents in Pharmacy Teaching and Learning. 10 (6), 807–815. Available from: doi: 10.1016/j.cptl.2018.03.019
- Chakraborty, T., Reddy K S, U., Naik, S. M., Panja, M. & Manvitha, B. (2024) Ten years of generative adversarial nets (GANs): a survey of the state-of-the-art. Machine Learning: Science and Technology. 5 (1), 011001. Available from: doi: 10.1088/2632-2153/ad1f77
- Che Azemin, M. Z., Mohd Tamrin, M. I., Hilmi, M. R. & Mohd Kamal, K. (2024) Assessing the Efficacy of StyleGAN 3 in Generating Realistic Medical Images with Limited Data Availability. In: Proceedings of the 2024 13th International Conference on Software and Computer Applications, ICSCA 2024, 1–3 February 2024, Bali Island, Indonesia. New York, Association for Computing Machinery. pp. 192–197. Available from: doi: 10.1145/3651781.3651810
- Chen, J., Liu, G. & Chen, X. (2020) AnimeGAN: A Novel Lightweight GAN for Photo Animation. In: International Symposium on Intelligence Computation and Applications, ISICA, 16–17 November 2019, Guangzhou, China. Singapore, Springer. pp. 242–256. Available from: doi: 10.1007/978-981-15-5577-0_18
- El-Nasr, M. S., Vasilakos, A., Rao, C. & Zupko, J. (2009) Dynamic Intelligent Lighting for Directing Visual Attention in Interactive 3-D Scenes. IEEE Transactions on Computational Intelligence and AI in Games. 1 (2), 145–153. Available from: doi: 10.1109/TCIAIG.2009.2024532
- Gao, J., Micheletto, M., Orrù, G., Concas, S., Feng, X., Marcialis, G. L. & Roli, F. (2024) Texture and artifact decomposition for improving generalization in deep-learning-based deepfake detection. Engineering Applications of Artificial Intelligence. 133, 108450. Available from: doi: 10.1016/j.engappai.2024.108450
- Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2014) Generative Adversarial Nets. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. and Weinberger, K. Q. (eds.) Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 8 – 13 December 2014, Montreal, Canada. pp. 2672–2680. Available from: https://proceedings.neurips.cc/paper/5423-generative-adversarial-nets [Accessed 23rd March 2026].
- Hamza, A. A. G. (2024) Realistic Shadows In Computer Graphics. ScienceOpen. [Preprint] Available from: doi: 10.14293/PR2199.000942.v1
- Hussain, S. A., Aslam, S., Silcock, B. W. & Ali, S. A. (2024) The visual narrative of Kashmir: Analysing conflict through social identity theory. Journal of Applied Journalism & Media Studies. Available from: doi: 10.1386/ajms_00155_1
- Islam, T., Miron, A., Liu, X. & Li, Y. (2024) Dynamic Fashion Video Synthesis from Static Imagery. Future Internet. 16 (8), 287. Available from: doi: 10.3390/fi16080287
- Jain, A. (2024) Generative Adversarial Networks: A Review of Developments and Diverse Applications. Authorea. [Preprint] Available from: doi: 10.22541/au.172979391.16488935/v1
- Kancharla, P. & Channappayya, S. S. (2018) Improving the Visual Quality of Generative Adversarial Network (GAN)-Generated Images Using the Multi-Scale Structural Similarity Index. In: 2018 25th IEEE International Conference on Image Processing, ICIP, 7–10 October 2018, Athens, Greece. New Jersey, IEEE. pp. 3908–3912. Available from: doi: 10.1109/ICIP.2018.8451296
- Kanuri, V. K., Hughes, C. & Hodges, B. T. (2024) Standing out from the crowd: When and why color complexity in social media images increases user engagement. International Journal of Research in Marketing. 41 (2), 174–193. Available from: doi: 10.1016/j.ijresmar.2023.08.007
- Kubiak, K. (2024) Design Thinking in Lighting Design to Meet User Needs. Sustainability. 16 (9), 3561. Available from: doi: 10.3390/su16093561
- Kumar, L. & Singh, D. K. (2023) Comparative analysis of Vid2Vid and Fast Vid2Vid Models for Video-to-Video Synthesis on Cityscapes Dataset. In: 2023 International Conference on Computer, Electronics & Electrical Engineering & Their Applications, IC2E3, 8–9 June 2023, Srinagar Garhwal, India. New Jersey, IEEE. pp. 660–664. Available from: doi: 10.1109/IC2E357697.2023.10262586
- Lee, S. H. & Leeghim, H. (2022) Synthetic Infra-Red Image Evaluation Methods by Structural Similarity Index Measures. Electronics. 11 (20), 3360. Available from: doi: 10.3390/electronics11203360
- Li, Z., Chen, L., Liu, C., Zhang, F., Li, Z., Gao, Y., Ha, Y., Xu, C., Quan, S. & Xu, Y. (2021) Animated 3D human avatars from a single image with GAN-based texture inference. Computers & Graphics. 95, 81–91. Available from: doi: 10.1016/j.cag.2021.01.002
- Liang, J., Fan, Y., Zhang, K., Timofte, R., Van Gool, L. & Ranjan, R. (2025) MoVideo: Motion-Aware Video Generation with Diffusion Model. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T. and Varol, G. (eds.) Computer Vision – ECCV 2024, 29 September–4 October 2024, Milan, Italy. Cham, Springer. pp. 56–74. Available from: doi: 10.1007/978-3-031-72784-9_4
- Mallya, A., Wang, T.-C., Sapra, K. & Liu, M.-Y. (2020) World-Consistent Video-to-Video Synthesis. In: Vedaldi, A., Bischof, H., Brox, T. and Frahm, J.-M. (eds.) Computer Vision – ECCV 2020, 23–28 August 2020, Glasgow, United Kingdom. Cham, Springer. pp. 359–378. Available from: doi: 10.1007/978-3-030-58598-3_22
- Manovich, L. (2016) Artistic Visualization. In: Paul, C. (ed.) A Companion to Digital Art. New Jersey, Wiley. pp. 426–444. Available from: doi: 10.1002/9781118475249.ch19
- Mathew, S. (2024) An Overview of Text to Visual Generation Using GAN. Indian Journal of Image Processing and Recognition. 4 (3), 1–9. Available from: doi: 10.54105/ijipr.A8041.04030424
- Purwanto, A., Kusrini, Utami, E. & Agustriawan, D. (2024) A Comprehensive Literature Review on Generative Adversarial Networks (GANs) for AI Anime Image Generation. In: 2024 IEEE International Conference on Artificial Intelligence and Mechatronics Systems, AIMS 2024, 22–23 February 2024, Virtual Conference. New Jersey, IEEE. pp. 1–6. Available from: doi: 10.1109/AIMS61812.2024.10513308
- Rakshitha, I., Nithin, U., Kareem, S. M. A., Rahul, V. N. S., Challa, N. P. & Naseeba, B. (2024) Anime Visage: Revealing Ingenuity with GAN-Assisted Character Development. In: 2024 International Conference on Expert Clouds and Applications, ICOECA, 18–19 April 2024, Bengaluru, India. New Jersey, IEEE. pp. 799–805. Available from: doi: 10.1109/ICOECA62351.2024.00142
- Re, S., Li, J., Li, Y. & Mao, J. (2022) Improved GAN Model for Image Animation. In: 2022 IEEE 5th International Conference on Information Systems and Computer Aided Education, ICISCAE 2022, 23–25 September 2022, Dalian, China. New Jersey, IEEE. pp. 838–842. Available from: doi: 10.1109/ICISCAE55891.2022.9927600
- Sangapu, S. C., Manogna, S. V. S., Sountharrajan, S. & Suganya, E. (2024) Enhancing Cartoonification using GAN Learning. In: 2024 IEEE International Conference for Women in Innovation, Technology & Entrepreneurship, ICWITE, 16–17 February 2024, Bangalore, India. New Jersey, IEEE. pp. 155–161. Available from: doi: 10.1109/ICWITE59797.2024.10502522
- Sedkaoui, S. & Benaichouba, R. (2024) Generative AI as a transformative force for innovation: a review of opportunities, applications and challenges. European Journal of Innovation Management. Available from: doi: 10.1108/EJIM-02-2024-0129
- Singh, J., Islam, S. M. N., Tewatia, M., Garg, D. & Fatima, N. (2024) Systematic Review of GAN for Enhancing Efficiency in AI in Gaming. In: 2024 International Conference on Advances in Computing Research on Science Engineering and Technology, ACROSET 2024, 27–28 September 2024, Indore, India. New Jersey, IEEE. pp. 1–8. Available from: doi: 10.1109/ACROSET62108.2024.10743943
- Tian, X. & Li, C. (2023) Augmented Reality Animation Image Information Extraction and Modeling Based on Generative Adversarial Network. Computer-Aided Design and Applications. 77–91. Available from: doi: 10.14733/cadaps.2024.S3.77-91
- Tran, T.-H., Bach, V.-D. & Doan, H.-G. (2020) vi-Mo-CoGAN: A Variant of MoCoGAN for Video Generation of Human Hand Gestures Under Different Viewpoints. In: Cree, M., Huang, F., Yuan, J. and Yan, W. (eds.) Pattern Recognition, ACPR 2019 Workshops, 26 November 2019, Auckland, New Zealand. Singapore, Springer. pp. 110–123. Available from: doi: 10.1007/978-981-15-3651-9_11
- Ursegov, S., Zakharian, A. & Miklina, O. (2022) Adaptive production forecast - a key element in petroleum reservoir digital transformation. In: Second EAGE Digitalization Conference and Exhibition, 23 March 2022, Vienna, Austria. Bunnik, European Association of Geoscientists & Engineers. pp. 1–5. Available from: doi: 10.3997/2214-4609.202239069
- Vaismoradi, M. & Snelgrove, S. (2019) Theme in Qualitative Content Analysis and Thematic Analysis. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research. 20 (3), 23. Available from: doi: 10.17169/fqs-20.3.3376
- Vecchio, G., Martin, R., Roullier, A., Kaiser, A., Rouffet, R., Deschaintre, V. & Boubekeur, T. (2024) ControlMat: A Controlled Generative Approach to Material Capture. ACM Transactions on Graphics. 43 (5), 1–17. Available from: doi: 10.1145/3688830
- Zhang, W. (2023) Animation Scene Design and Machine Vision Rendering Optimization Combining Generative Models. Computer-Aided Design and Applications. 1–15. Available from: doi: 10.14733/cadaps.2024.S15.1-15
- Zhuo, L., Wang, G., Li, S., Wu, W. & Liu, Z. (2022) Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis. In: Karlinsky, L., Michaeli, T. and Nishino, K. (eds.) Computer Vision – ECCV 2022 Workshops, 23–27 October 2022, Tel Aviv, Israel. Cham, Springer. pp. 289–305. Available from: doi: 10.1007/978-3-031-19784-0_17
- Zhuo, L., Wang, G., Li, S., Wu, W. & Liu, Z. (2024) Fast-Vid2Vid++: Spatial-Temporal Distillation for Real-Time Video-to-Video Synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 46 (12), 10732–10747. Available from: doi: 10.1109/TPAMI.2024.3450630
