Optimization of Dual-Constrained Robotic Arm Grasping Operations Based on an Optimized Contact Grasping Network
Abstract
Despite continuous advances in intelligent manufacturing and robot perception, traditional robotic arm grasping methods still suffer from inaccurate pose prediction and poor task adaptability when handling dynamic scenes, complex objects, and functional actions. To address this, this study develops an optimized contact grasping network model that integrates both scene constraints and task constraints. The model combines a UR5 six-degree-of-freedom robotic arm, visual input, and a point-cloud-based Contact-GraspNet architecture, and introduces a PointNet++ local feature enhancement mechanism together with a lightweight encoder design to improve spatial perception and action planning at grasp points. Experimental results show that on the GraspNet and YCB datasets the model achieves F1 scores of 92.54% and 91.82%, respectively, with the average execution time reduced to 0.61 seconds. In functional manipulation scenarios involving door handles, kettle handles, and drawer pulls, grasping accuracy remained above 0.89, and task completion rates significantly outperformed mainstream baseline models. Under visual interference with occlusion rates of up to 75%, the average inference latency stayed within 0.82 seconds, and under varying light intensities the pose angle error remained between 1.21° and 1.87°. The model therefore offers comprehensive advantages in grasping precision, latency control, and deployment efficiency, and holds promise for wide application in industrial, service, and specialized task environments.

