Final Project Report – Easy Drawing Animation

Ryo Suzuki (http://ryosuzuki.org/)
University of Colorado Boulder
Department of Computer Science
[Demo animations: demo-1.gif, demo-2.gif]

1. Introduction: 

Character animation is an essential medium for conveying ideas, promoting products, and telling stories. Despite these benefits, creating drawn animation remains challenging. In traditional cartoon animation, artists must draw every frame by hand. Although computer-aided animation tools (e.g., Adobe After Effects) can reduce this manual effort, the process is still difficult and time-consuming even for experienced users [Xing 2015]. This drawback limits the use of character animation: easier animation tools could, for example, speed up prototyping during story creation or engage children's creativity, yet such use scenarios are not well supported by current animation tools.

Prior work has proposed alternative approaches to animation authoring, including shape deformation [Igarashi 2005], kinetic textures [Kazi 2014], and autocompleted frame-by-frame animation [Xing 2015]. However, these tools require users to specify movements with a mouse and to manage many parameters once they want to create complex animations or replicate real-world movements such as body gestures or facial expressions, which quickly becomes intractable even for professional users [Sýkora 2009, Igarashi 2005]. Since recent work has demonstrated the benefits of using real-world movements as a motion source [Thies 2016], we hypothesize that gesture-based animation authoring can significantly reduce users' manual effort in creating complex character animation.

In this paper, we explore motion-based hand-drawn character animation driven by the user's captured real-world movements. In the design process, users simply draw a character or import vector images into the workspace. Our system detects the user's facial expressions with a web camera or body gestures with a Kinect, then automatically converts these input movements into character animation through real-time deformation. In the following sections, we describe the implementation details of our prototype system.

 

2. Implementation:

2-1. Animation with Facial Expression:

First, we propose an animation authoring tool based on real-time facial expressions. In the design process, the system first lets users draw a facial character. Once the drawing is finished, the system captures the user's face with a web camera and performs face tracking, then automatically animates the target character based on the position of each facial part (e.g., eyes, eyebrows, nose, and mouth). The following paragraphs describe each step in detail.
Sketch beautification:
In the sketch beautification step, the system first recognizes a user's stroke as a Bézier path. Based on this path, it classifies the stroke into one of three common shapes (lines, arcs, and circles) by computing the distances between segments and the angles between tangents along the path. To distinguish an arc from a circle (or ellipse), the system also computes the total swept angle; if the span is close to 2π (≈ 6.28), the arc is replaced with a full circle. Applying this method naively to more complex beautification, such as connections, line alignment, and perpendicularity detection, makes the search space too large to compute. To avoid this problem, we adapt the basic algorithms discussed in [Cheema 2012, Fišer 2015, Igarashi 1997], which reduce the computational cost by pruning the search graph.
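The following TypeScript sketch illustrates the swept-angle classification (the thresholds are assumptions, and our implementation operates on the fitted Bézier segments rather than raw stroke points):

    type Point = { x: number; y: number };

    // Classify a stroke by its total swept angle: the sum of signed angle
    // changes between successive tangent directions along the path.
    function classifyStroke(points: Point[]): "line" | "arc" | "circle" {
      let totalAngle = 0;
      for (let i = 1; i < points.length - 1; i++) {
        const a1 = Math.atan2(points[i].y - points[i - 1].y, points[i].x - points[i - 1].x);
        const a2 = Math.atan2(points[i + 1].y - points[i].y, points[i + 1].x - points[i].x);
        let d = a2 - a1;
        // Normalize to (-π, π] so the sum measures the true turning.
        if (d > Math.PI) d -= 2 * Math.PI;
        if (d <= -Math.PI) d += 2 * Math.PI;
        totalAngle += d;
      }
      const span = Math.abs(totalAngle);
      if (span < 0.2) return "line";                 // tangents barely turn
      if (span > 0.9 * 2 * Math.PI) return "circle"; // span close to 2π: snap to a full circle
      return "arc";
    }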
Face recognition:
To detect facial expressions, we adapt Saragih et al.'s deformable model fitting algorithm [Saragih 2011]. This technique provides roughly 70 line segments that outline the facial features, and we create the animation from the positions of these segments. The current prototype does not automatically detect which sketch stroke corresponds to which facial part; instead, we manually label the parts of the face based on their position and shape. We plan to automate this in a future implementation.
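As a rough illustration, the part-to-landmark mapping might look like the sketch below; the part names and landmark indices are hypothetical, since the actual indices depend on the fitted deformable model:

    type Point = { x: number; y: number };

    // Hypothetical mapping from manually labeled sketch parts to indices in
    // the ~70-point landmark array returned by the face tracker.
    const PART_LANDMARKS: Record<string, number> = {
      leftEye: 27,
      rightEye: 32,
      nose: 41,
      mouth: 57,
    };

    // Move each labeled sketch part so it follows its tracked landmark.
    function animateFace(
      landmarks: Point[],
      parts: Map<string, { setPosition: (p: Point) => void }>
    ): void {
      for (const [name, index] of Object.entries(PART_LANDMARKS)) {
        const part = parts.get(name);
        if (part && landmarks[index]) part.setPosition(landmarks[index]);
      }
    }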

 

2-2. Gesture-based Character Animation:

Next, we describe gesture-based character animation.
Gesture detection with Kinect: 
While we originally intended to capture the skeleton with a web camera, we found this still technically difficult (though not impossible) even with state-of-the-art computer vision techniques. We therefore decided to use a Kinect to capture the motion gesture.
Because Microsoft's official Kinect SDK is not available on macOS, we instead use the open-source libfreenect library (https://github.com/OpenKinect/libfreenect) and its wrapper for Processing. Since libfreenect exposes only low-level depth sensor data without skeleton information, we detect hand gesture movements directly from the depth data. The data are sent to a Node.js server over WebSockets in real time.
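The hand detection itself is a simple nearest-point heuristic over the depth frame. The sketch below illustrates the idea in TypeScript for consistency with the Node.js server (our actual capture code runs in Processing, and the thresholds and server address are assumptions):

    import WebSocket from "ws"; // npm package "ws"

    const WIDTH = 640, HEIGHT = 480;                  // Kinect v1 depth resolution
    const ws = new WebSocket("ws://localhost:8080");  // assumed server address

    // Heuristic hand detection: the hand is assumed to be the closest valid
    // point to the sensor in the depth frame (values in millimeters).
    function detectHand(depth: Uint16Array): { x: number; y: number } | null {
      let best = Infinity, bestIndex = -1;
      for (let i = 0; i < depth.length; i++) {
        const d = depth[i];
        if (d > 400 && d < best) { best = d; bestIndex = i; } // skip invalid/too-near readings
      }
      if (bestIndex < 0) return null;
      return { x: bestIndex % WIDTH, y: Math.floor(bestIndex / WIDTH) };
    }

    // Stream the detected hand position to the server for each depth frame.
    function onDepthFrame(depth: Uint16Array): void {
      const hand = detectHand(depth);
      if (hand && ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify(hand));
      }
    }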
As-rigid-as-possible surface deformation:
Once we obtain the gesture information, the system deforms the character, which is provided as the user's drawing or a vector image. To deform arbitrary 2D characters, we adapt the as-rigid-as-possible surface deformation algorithm.
First, the system triangulates the 2D drawing into a mesh. In this process, we take the user's strokes as input and produce a 2D triangle mesh as output. We first simplify each stroke into a series of Bézier paths and extract the vertices of the connected paths. We then create triangles with the constrained Delaunay triangulation algorithm [Chew 1989].
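A minimal sketch of this step, assuming the poly2tri JavaScript library for the constrained Delaunay triangulation (the choice of this particular library is an assumption):

    import * as poly2tri from "poly2tri";

    // Triangulate a character outline (the vertices extracted from the
    // simplified Bézier paths) into a 2D triangle mesh. Optional inner
    // Steiner points give the mesh enough resolution to deform smoothly.
    function triangulateOutline(
      outline: { x: number; y: number }[],
      innerPoints: { x: number; y: number }[] = []
    ): poly2tri.Triangle[] {
      const contour = outline.map((p) => new poly2tri.Point(p.x, p.y));
      const ctx = new poly2tri.SweepContext(contour);
      for (const p of innerPoints) ctx.addPoint(new poly2tri.Point(p.x, p.y));
      ctx.triangulate();
      return ctx.getTriangles();
    }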
Next, we deform each inner triangle by minimizing the distortion of the triangle mesh [Igarashi 2005, Sorkine 2007]. The energy function can be written as follows:
E(x) = \sum_{t \in T} a_t \sum_{(i,j) \in t} w_{ij} \left\| (x_i - x_j) - R_t (\bar{x}_i - \bar{x}_j) \right\|^2
where x_i and \bar{x}_i are the deformed and rest-pose positions of vertex i, a_t is the area of triangle t ∈ T, R_t is the rotation fitted to triangle t, and w_ij is the weight for the edge between vertices i and j. The system minimizes this energy function to obtain the target vertex positions. The result can be seen in the video.
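A minimal sketch of the local step of the usual local/global (alternating) solver, which fits the optimal rotation R_t for one triangle; in 2D the polar decomposition has a closed form, so no SVD is needed (the weighting scheme and the global linear solve are omitted):

    type Vec2 = { x: number; y: number };

    // Fit the best rotation angle for one triangle from its rest-pose edge
    // vectors (rest) and current deformed edge vectors (cur).
    function fitRotation(rest: Vec2[], cur: Vec2[], weights: number[]): number {
      // Covariance S = sum_k w_k * cur_k * rest_k^T (edges as column vectors).
      let s11 = 0, s12 = 0, s21 = 0, s22 = 0;
      for (let k = 0; k < rest.length; k++) {
        const w = weights[k];
        s11 += w * cur[k].x * rest[k].x; s12 += w * cur[k].x * rest[k].y;
        s21 += w * cur[k].y * rest[k].x; s22 += w * cur[k].y * rest[k].y;
      }
      // The rotation maximizing trace(R^T S) has angle atan2(s21 - s12, s11 + s22).
      return Math.atan2(s21 - s12, s11 + s22);
    }

    // Rotate a rest-pose edge by the fitted angle to get its rigid target,
    // which the global step then matches in a least-squares sense.
    function rotateEdge(angle: number, v: Vec2): Vec2 {
      const c = Math.cos(angle), s = Math.sin(angle);
      return { x: c * v.x - s * v.y, y: s * v.x + c * v.y };
    }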

 

3. Conclusion: 

In this paper, we explored character animation driven by real-world movements such as facial expressions and body gestures. To demonstrate this idea, we implemented two proof-of-concept prototypes. While our current prototypes can be significantly improved in future implementations, we believe our system can expand the possible uses of character animation across various applications.

Resources:

Original concept video: https://youtu.be/vBhn3s5IDWA

References:

  1. Cheema, Salman, Sumit Gulwani, and Joseph LaViola. “QuickDraw: improving drawing experience for geometric diagrams.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1037-1046. ACM, 2012.
  2. Chew, L. Paul. “Constrained Delaunay triangulations.” Algorithmica 4, no. 1-4 (1989): 97-108.
  3. Fišer, Jakub, Paul Asente, and Daniel Sýkora. “ShipShape: a drawing beautification assistant.” In Proceedings of the Workshop on Sketch-Based Interfaces and Modeling, pp. 49-57. Eurographics Association, 2015.
  4. Igarashi, Takeo, Tomer Moscovich, and John F. Hughes. “As-rigid-as-possible shape manipulation.” ACM Transactions on Graphics (TOG) 24, no. 3 (2005): 1134-1141.
  5. Igarashi, Takeo, Tomer Moscovich, and John F. Hughes. “Spatial keyframing for performance-driven animation.” In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 107-115. ACM, 2005.
  6. Igarashi, Takeo, Satoshi Matsuoka, Sachiko Kawachiya, and Hidehiko Tanaka. “Interactive beautification: a technique for rapid geometric design.” In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, pp. 105-114. ACM, 1997.
  7. Kazi, Rubaiat Habib, Fanny Chevalier, Tovi Grossman, Shengdong Zhao, and George Fitzmaurice. “Draco: bringing life to illustrations with kinetic textures.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 351-360. ACM, 2014.
  8. Saragih, Jason M., Simon Lucey, and Jeffrey F. Cohn. “Deformable model fitting by regularized landmark mean-shift.” International Journal of Computer Vision 91, no. 2 (2011): 200-215.
  9. Sorkine, Olga, and Marc Alexa. “As-rigid-as-possible surface modeling.” In Proceedings of the Eurographics Symposium on Geometry Processing, pp. 109-116. 2007.
  10. Sýkora, Daniel, John Dingliana, and Steven Collins. “As-rigid-as-possible image registration for hand-drawn cartoon animations.” In Proceedings of the 7th International Symposium on Non-Photorealistic Animation and Rendering, pp. 25-33. ACM, 2009.
  11. Thies, Justus, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. “Face2Face: real-time face capture and reenactment of RGB videos.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  12. Xing, Jun, Li-Yi Wei, Takaaki Shiratori, and Koji Yatani. “Autocomplete hand-drawn animations.” ACM Transactions on Graphics (TOG) 34, no. 6 (2015): 169.