YuMi-Chess: Applying Synthetic Image Generation in Real World Scenarios
Conference proceedings
In this paper, we present a comprehensive approach to utilizing synthetic image data for computer vision tasks in real-world industrial applications. Specifically, we focus on an ABB vision-guided industrial robot designed to play chess, which exemplifies many pick-and-place tasks in industrial environments. Today's vision systems in robotics, which rely heavily on convolutional neural networks (CNNs), achieve high accuracy but require extensive real-world data for training, which is often resource-intensive to collect. To mitigate this, we leverage synthetic data generation that replicates the visual diversity of actual industrial environments, thereby enhancing model generalization while reducing dependency on real-world data collection. Our work builds on previous research by extending its capabilities: in addition to recognizing chess pieces, we determine their spatial positions within the robot's coordinate space, handle unusual edge cases such as fallen pieces, and provide calibration methods for the chessboard. We also integrate an advanced chess engine to improve the robot's chess-playing decision-making. This study demonstrates that synthetic data can significantly streamline the development and deployment of robust vision-guided robotic applications, providing a scalable solution for various industrial tasks.
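To make the pipeline described in the abstract more concrete, the following is a minimal sketch of how a chess engine could be coupled with the robot's coordinate space for a pick-and-place move. The paper does not specify the engine, library, or calibration values; the use of python-chess with a UCI engine such as Stockfish, the engine path, and the board origin and square size are assumptions for illustration only, standing in for the calibration described in the paper.

```python
import chess
import chess.engine

# Hypothetical calibration of the board in the robot's coordinate frame (mm).
# Real values would come from the chessboard calibration method in the paper.
BOARD_ORIGIN_MM = (300.0, -140.0)   # assumed (x, y) of the centre of square a1
SQUARE_SIZE_MM = 40.0               # assumed edge length of one square

def square_to_robot_xy(square: chess.Square) -> tuple[float, float]:
    """Convert a chess square index (0..63) to an (x, y) position in mm."""
    file_idx = chess.square_file(square)   # 0 = file a, ..., 7 = file h
    rank_idx = chess.square_rank(square)   # 0 = rank 1, ..., 7 = rank 8
    x = BOARD_ORIGIN_MM[0] + file_idx * SQUARE_SIZE_MM
    y = BOARD_ORIGIN_MM[1] + rank_idx * SQUARE_SIZE_MM
    return x, y

def next_move(fen: str, engine_path: str = "/usr/bin/stockfish"):
    """Ask a UCI engine for a move and derive pick/place coordinates."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        result = engine.play(board, chess.engine.Limit(time=0.5))
    move = result.move
    pick_xy = square_to_robot_xy(move.from_square)   # where to grasp the piece
    place_xy = square_to_robot_xy(move.to_square)    # where to set it down
    return move.uci(), pick_xy, place_xy

if __name__ == "__main__":
    uci, pick, place = next_move(chess.STARTING_FEN)
    print(f"Engine move {uci}: pick at {pick}, place at {place}")
```

In such a setup, the board state passed to the engine would be produced by the CNN-based recognition step, and the resulting pick and place coordinates would be handed to the robot controller; the exact interfaces used in the paper may differ.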