Character animation is an essential medium for conveying ideas, promoting products, and telling stories. Despite these benefits, creating drawn animations remains challenging. In traditional cartoon animation, artists must draw each frame by hand. Although computer-aided animation tools (e.g., Adobe After Effects) can reduce this manual effort, the process is still difficult and time-consuming even for experienced users [Xing 2015]. This drawback limits the use of character animation: easier animation tools could, for example, facilitate prototyping in story creation or engage children's creativity, but current animation tools do not fully support these scenarios.
Prior work has proposed alternative approaches to animation authoring through shape deformation [Igarashi 2005], kinetic textures [Kazi 2014], and autocompletion of frame-by-frame animations [Xing 2015]. However, these tools require users to specify movements with a mouse and to manage many parameters when creating complex animations or replicating real-world movements such as body gestures or facial expressions. This quickly becomes intractable even for professional users [Sýkora 2009, Igarashi 2005]. As recent work has demonstrated the benefits of using real-world movements as a motion source [Thies 2016], we hypothesize that gesture-based animation authoring can significantly reduce the manual effort of creating complex character animations.
In this paper, we explore motion-based hand-drawn character animation driven by the user's captured real-world movements. In the design process, users simply draw a character or import vector images into the workspace. Our system detects the user's facial expression with a web camera or body gestures with a Kinect. The system then automatically converts these input movements into character animation by performing real-time deformation. In the following sections, we describe the implementation details of our prototype system.
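The retargeting step described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: the function name and the per-handle linear mapping are assumptions. It maps displacements of tracked landmarks (e.g., face landmarks from the webcam tracker) onto matching control handles on the drawn character; in the full pipeline these deformed handles would then drive an as-rigid-as-possible mesh deformation [Igarashi 2005].

```python
def retarget_landmarks(rest_landmarks, live_landmarks, rest_handles, scale=1.0):
    """Map tracked landmark displacements onto character control handles.

    rest_landmarks / live_landmarks: (x, y) tuples in the tracker's
    coordinate frame, at rest pose and at the current frame.
    rest_handles: matching (x, y) control points on the drawn character.
    scale: converts tracker units into the character's coordinate frame
    (a simplifying assumption; a real system would calibrate this).
    Returns the deformed handle positions for the current frame.
    """
    deformed = []
    for (rx, ry), (lx, ly), (hx, hy) in zip(rest_landmarks,
                                            live_landmarks,
                                            rest_handles):
        # Displacement of the tracked landmark relative to its rest pose,
        # applied to the corresponding character handle.
        dx, dy = (lx - rx) * scale, (ly - ry) * scale
        deformed.append((hx + dx, hy + dy))
    return deformed
```

For instance, if a mouth-corner landmark rises by two pixels between the rest pose and the current frame, the character handle bound to it rises by the same scaled amount, and the surrounding mesh follows via the deformation solver.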
2-1. Animation with Facial Expression
2-2. Gesture-based Character Animation
- Cheema, Salman, Sumit Gulwani, and Joseph LaViola. “QuickDraw: improving drawing experience for geometric diagrams.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1037-1046. ACM, 2012.
- Chew, L. Paul. “Constrained Delaunay triangulations.” Algorithmica 4, no. 1-4 (1989): 97-108.
- Fišer, Jakub, Paul Asente, and Daniel Sýkora. “ShipShape: a drawing beautification assistant.” In Proceedings of the workshop on Sketch-Based Interfaces and Modeling, pp. 49-57. Eurographics Association, 2015.
- Igarashi, Takeo, Tomer Moscovich, and John F. Hughes. “As-rigid-as-possible shape manipulation.” ACM Transactions on Graphics (TOG) 24, no. 3 (2005): 1134-1141.
- Igarashi, Takeo, Tomer Moscovich, and John F. Hughes. “Spatial keyframing for performance-driven animation.” In Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 107-115. ACM, 2005.
- Igarashi, Takeo, Satoshi Matsuoka, Sachiko Kawachiya, and Hidehiko Tanaka. “Interactive beautification: a technique for rapid geometric design.” In Proceedings of the 10th annual ACM symposium on User interface software and technology, pp. 105-114. ACM, 1997.
- Kazi, Rubaiat Habib, Fanny Chevalier, Tovi Grossman, Shengdong Zhao, and George Fitzmaurice. “Draco: bringing life to illustrations with kinetic textures.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 351-360. ACM, 2014.
- Saragih, Jason M., Simon Lucey, and Jeffrey F. Cohn. “Deformable model fitting by regularized landmark mean-shift.” International Journal of Computer Vision 91, no. 2 (2011): 200-215.
- Sorkine, Olga, and Marc Alexa. “As-rigid-as-possible surface modeling.” In Symposium on Geometry Processing, vol. 4. 2007.
- Sýkora, Daniel, John Dingliana, and Steven Collins. “As-rigid-as-possible image registration for hand-drawn cartoon animations.” In Proceedings of the 7th International Symposium on Non-Photorealistic Animation and Rendering, pp. 25-33. ACM, 2009.
- Thies, Justus, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. “Face2Face: real-time face capture and reenactment of RGB videos.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016.
- Xing, Jun, Li-Yi Wei, Takaaki Shiratori, and Koji Yatani. “Autocomplete hand-drawn animations.” ACM Transactions on Graphics (TOG) 34, no. 6 (2015): 169.