A robot testbed for writing Chinese and Japanese calligraphy characters is presented. Single strokes of the calligraphy characters are represented in a database and initialized with a scanned reference image and a manually chosen initial drawing spline. A learning procedure uses visual feedback to analyze each new iteration of the drawn stroke and updates the drawing spline such that every subsequent drawn stroke becomes more similar to the reference image. The learning procedure can be performed either in simulation, using a simple brush model to create simulated images of the strokes, or with a real robot arm equipped with a calligraphy brush and a camera that captures images of the drawn strokes. Results from both simulations and experiments with the robot arm are presented.
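The iterative visual-feedback loop described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a toy raster brush model (stamping disks along a polyline through spline control points), a squared-pixel-difference error against the reference image, and a simple coordinate-descent update of the control points; the paper's actual brush model, spline representation, and update rule may differ.

```python
import numpy as np

def render_stroke(ctrl_pts, size=32, radius=2, samples=100):
    """Toy brush model (an assumption, not the paper's model):
    stamp disks along a polyline through the control points."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    t = np.linspace(0, len(ctrl_pts) - 1, samples)
    idx = np.minimum(t.astype(int), len(ctrl_pts) - 2)
    frac = (t - idx)[:, None]
    pts = (1 - frac) * ctrl_pts[idx] + frac * ctrl_pts[idx + 1]
    for px, py in pts:
        img[(xs - px) ** 2 + (ys - py) ** 2 <= radius ** 2] = 1.0
    return img

def image_error(drawn, reference):
    """Visual feedback signal: squared pixel difference."""
    return np.sum((drawn - reference) ** 2)

def refine(ctrl_pts, reference, iters=20, step=1.0):
    """One learning procedure (coordinate descent, a stand-in for
    the paper's update rule): nudge each control point in whichever
    direction reduces the image error."""
    pts = ctrl_pts.astype(float).copy()
    for _ in range(iters):
        for i in range(len(pts)):
            for d in range(2):
                base = image_error(render_stroke(pts), reference)
                for delta in (step, -step):
                    trial = pts.copy()
                    trial[i, d] += delta
                    if image_error(render_stroke(trial), reference) < base:
                        pts = trial
                        break
    return pts

# Usage sketch: a "scanned" reference rendered from hidden true control
# points, and a manually chosen initial spline offset from it.
true_pts = np.array([[8, 8], [16, 16], [24, 24]])
reference = render_stroke(true_pts)
initial_pts = true_pts + np.array([4, 0])  # initial guess, shifted right
refined_pts = refine(initial_pts, reference)
```

After refinement, each newly rendered stroke matches the reference more closely than the initial guess, mirroring the abstract's claim that every subsequent drawn stroke becomes more similar to the reference image. In the real setup, `render_stroke` would be replaced by the robot arm drawing the stroke and the camera capturing it.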