Synthetic Image Data Generation via Rendering Techniques for Training AI-Based Instance Segmentation

Authors

  • Dickson Yik Cheng Kho Faculty of Manufacturing and Mechatronic Engineering Technology, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia
  • Norazlianie Sazali Faculty of Manufacturing and Mechatronic Engineering Technology, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia
  • Maurice Kettner Institut für Kälte-, Klima- und Umwelttechnik, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany
  • Christian Friedrich Department of Mechanical Engineering and Mechatronics, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany
  • Constantin Schempp Department of Mechanical Engineering and Mechatronics, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany
  • Naqib Salim Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany
  • Ismayuzri Ishak Faculty of Manufacturing and Mechatronic Engineering Technology, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia
  • Saiful Anwar Che Ghani Faculty of Mechanical and Automotive Engineering, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia

DOI:

https://doi.org/10.37934/araset.62.1.158169

Keywords:

Synthetic image data generation, Rendering techniques, BlenderProc, COCO annotations

Abstract

Synthetic image data generation has gained popularity in computer vision and machine learning in recent years. This work introduces a technique for creating synthetic image data from 3D model files using rendering methods in Python and Blender. The technique employs BlenderProc, a procedural rendering pipeline built on Blender, to efficiently produce large volumes of synthetic images. The output is saved in JSON format containing COCO annotations of the objects in each image, enabling seamless integration with existing machine-learning pipelines. The paper shows that the generated synthetic data can be used to augment object data during simulation. By varying simulation parameters such as lighting, camera position, and object orientation, the method produces diverse images that can improve the accuracy and robustness of machine-learning models. This is especially beneficial for applications that would otherwise require large amounts of labelled real-world data, which is time-consuming and labour-intensive to collect. The study also discusses the limitations and potential biases of synthetic data generation, emphasizing the importance of validating and assessing the generated data before it is used to train machine-learning models. Synthetic data generation can therefore be a valuable tool for improving the efficiency and effectiveness of machine learning and computer vision applications, provided its limitations and biases are carefully evaluated. The paper highlights the potential of synthetic data generation to enhance the accuracy and resilience of machine-learning models, particularly in scenarios with limited access to labelled real-world data, and presents a method that efficiently produces large amounts of synthetic image data with COCO annotations as a resource for practitioners in computer vision and machine learning.
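
The abstract describes a BlenderProc-based pipeline that loads 3D models, randomizes lighting and camera poses, renders images, and writes COCO annotations in JSON. The authors' script is not reproduced on this page; the sketch below only illustrates that kind of pipeline using BlenderProc's public API, with the model path, light energy, camera sampling ranges, and output directory chosen arbitrarily for illustration.

    import blenderproc as bproc
    import numpy as np

    bproc.init()

    # Load a 3D model (path is illustrative; the paper does not specify the files used)
    objs = bproc.loader.load_obj("models/part.obj")
    for idx, obj in enumerate(objs):
        obj.set_cp("category_id", idx + 1)  # category ids are consumed by the COCO writer

    # A point light whose position and energy can be randomized per run
    light = bproc.types.Light()
    light.set_type("POINT")
    light.set_location(np.random.uniform([-3, -3, 2], [3, 3, 5]))
    light.set_energy(np.random.uniform(300, 1000))

    # Sample several camera poses looking towards the loaded objects
    poi = bproc.object.compute_poi(objs)
    for _ in range(10):
        location = np.random.uniform([-2, -2, 1], [2, 2, 3])
        rotation = bproc.camera.rotation_from_forward_vec(poi - location)
        bproc.camera.add_camera_pose(bproc.math.build_transformation_mat(location, rotation))

    # Render colour images plus instance segmentation maps
    bproc.renderer.enable_segmentation_output(map_by=["category_id", "instance", "name"])
    data = bproc.renderer.render()

    # Write COCO-style annotations (JSON) alongside the rendered images
    bproc.writer.write_coco_annotations(
        "output/coco_data",
        instance_segmaps=data["instance_segmaps"],
        instance_attribute_maps=data["instance_attribute_maps"],
        colors=data["colors"],
        color_file_format="JPEG",
    )

A script of this kind is typically launched with the command "blenderproc run generate.py" (file name hypothetical) so that BlenderProc can start its bundled Blender instance; re-sampling the light and camera parameters across frames and runs produces the image diversity the abstract refers to.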

Author Biographies

Dickson Yik Cheng Kho, Faculty of Manufacturing and Mechatronic Engineering Technology, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia

dkyikcheng@gmail.com

Norazlianie Sazali, Faculty of Manufacturing and Mechatronic Engineering Technology, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia

azlianie@umpsa.edu.my

Maurice Kettner, Institut für Kälte-, Klima- und Umwelttechnik, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany

maurice.kettner@h-ka.de

Christian Friedrich, Department of Mechanical Engineering and Mechatronics, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany

christian.friedrich@h-ka.de

Constantin Schempp, Department of Mechanical Engineering and Mechatronics, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany

constantin.schempp@h-ka.de

Naqib Salim, Karlsruhe University of Applied Sciences, Moltkestraße 30, 76133 Karlsruhe, Germany

muhamad_naqib.md_salim@h-ka.de

Ismayuzri Ishak, Faculty of Manufacturing and Mechatronic Engineering Technology, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia

yuzriishak@umpsa.edu.my

Saiful Anwar Che Ghani, Faculty of Mechanical and Automotive Engineering, University Malaysia Pahang, 26600 Pekan, Pahang, Malaysia

anwarcg@umpsa.edu.my

Published

2024-10-14

How to Cite

Kho, D. Y. C., Sazali, N., Kettner, M., Friedrich, C., Schempp, C., Salim, N., Ishak, I., & Che Ghani, S. A. (2024). Synthetic Image Data Generation via Rendering Techniques for Training AI-Based Instance Segmentation. Journal of Advanced Research in Applied Sciences and Engineering Technology, 62(1), 158–169. https://doi.org/10.37934/araset.62.1.158169

Issue

Section

Articles
