NYU CAT
Events Archive
Talk
April 2004

Friday, April 2nd, 3 p.m.
719 Broadway, 12th Floor Large Conference Room

High Dynamic Range Video

Sing Bing Kang
Microsoft Research


Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. In this talk, I will describe our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. I will show examples for a variety of dynamic scenes, and will dwell a bit on the specific application of a virtual walkthrough. I will also describe how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.
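To make the radiance-map step concrete, here is a minimal sketch of merging already-registered, bracketed frames into a radiance estimate. It is a generic illustration rather than the authors' implementation, and it assumes a linear camera response and frame values normalized to [0, 1].

```python
import numpy as np

def merge_radiance(frames, exposure_times):
    """Illustrative HDR merge of registered, bracketed frames (linear response assumed)."""
    frames = np.asarray(frames, dtype=np.float64)     # shape: (num_frames, height, width)
    numerator = np.zeros(frames.shape[1:])
    denominator = np.zeros(frames.shape[1:])
    for image, t in zip(frames, exposure_times):
        weight = 1.0 - np.abs(2.0 * image - 1.0)      # trust mid-tones, distrust clipped pixels
        numerator += weight * (image / t)             # per-frame estimate of scene radiance
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)
```

In the video setting described above, neighboring frames would first need motion-compensated registration, and the resulting radiance map would then be tonemapped for display.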

This talk is based on joint work with Matthew Uyttendaele, Simon Winder, and Richard Szeliski, and was presented at SIGGRAPH'03.

Sing Bing Kang received his Ph.D. in robotics from CMU in 1994. He is currently a researcher at Microsoft Corporation working on environment modeling from images. His paper on the Complex Extended Gaussian Image won the IEEE Computer Society Outstanding Paper award at CVPR'91. His IEEE Transactions on Robotics and Automation paper on human-to-robot hand mapping was awarded the 1997 King-Sun Fu Memorial Best Transaction Paper award. Sing Bing has published about 20 refereed journal papers and about 45 refereed conference papers, mostly on stereo and image-based rendering. He has also co-edited two books: one on panoramic vision (published by Springer in 2001), and another on "Emerging Topics in Computer Vision" (to appear in May 2004).
Special CAT Colloquium, Dr. Eric Petajan
March 2004

Thursday, March 4th, 2 p.m.
719 Broadway, 12th Floor Large Conference Room

Ubiquitous Animated Characters: Removing the authoring and bandwidth bottlenecks with performance capture and MPEG-4

Dr. Eric Petajan
Founder and Chief Scientist
face2face animation, inc.


The demand for high quality animated content has produced thriving video game and CG feature film industries. Simultaneously, the entertainment industry is looking for new ways to reach consumers wherever they are. Given current wireless bandwidth limitations and Moore's Law, low bit-rate delivery of animated content provides the best way to entertain the consumer on mobile devices. The bit-rate needed to animate a typical graphical or object-based scene is orders of magnitude less than the compressed video bit-rate of the rendered scene.

The development of MPEG-4 animation coding is motivated by the need for high quality visual communication at low bit-rates coupled with low-cost graphics rendering systems. MPEG-4 contains a comprehensive set of tools for representing and compressing content objects and the animation of those objects. Virtual humans (faces and bodies) are treated as a special type of object in MPEG-4, with anatomically specific locations and animation parameters specified in the standard. While virtual humans can be treated as generic graphical objects, there are particular advantages to representing them with the Face and Body Animation (FBA) coding specification. A facial motion capture system has been developed which transfers the lip and head motion from ordinary video of a real human to an animated character using only the MPEG-4 Face Animation Parameters (FAPs).

The perception of human speech incorporates both acoustic and visual communication modalities. Automatic speech recognition (ASR) systems have traditionally processed only the acoustic signal. While video cameras and video acquisition systems have become economical, the use of automatic lipreading to enhance speech recognition performance remains an ongoing and fruitful research topic. During the last 20 years a variety of research systems have been developed which demonstrate that visual speech information enhances overall recognition accuracy, especially in the presence of acoustic noise. The performance of all of these systems could be greatly improved by better video feature acquisition, better acoustic/visual recognition integration methods, or both.

face2face animation has developed a facial motion capture system for the efficient authoring of high quality animated characters. The animation quality is high enough for broadcast television (HBO) and the bit-rate is low enough for communication over mobile or dialup networks. face2face has recently created the first 3D talking characters on a mobile phone. This talk will cover the MPEG-4 standard and the many options for delivering high quality talking characters to the consumer.
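To put the bit-rate argument in rough numbers, a quick back-of-envelope comparison might look like the sketch below. The per-parameter bit budget, active-parameter fraction, frame rate, and video bit-rate are illustrative assumptions, not figures from the talk; only the FAP count comes from the MPEG-4 FBA specification.

```python
# Back-of-envelope comparison of a face-animation stream vs. compressed video.
# Only the FAP count comes from the MPEG-4 FBA spec; the rest are assumptions.
fap_count = 68           # MPEG-4 FBA defines 68 Face Animation Parameters
active_fraction = 0.3    # assume ~30% of FAPs are actually sent per frame
bits_per_fap = 6         # assumed average bits per quantized, entropy-coded FAP
frame_rate = 25          # animation frames per second

fap_bitrate = fap_count * active_fraction * bits_per_fap * frame_rate  # bits/s, ~3 kbit/s here
video_bitrate = 500_000                                                 # a modest compressed-video rate, bits/s

print(f"FAP stream:       ~{fap_bitrate / 1000:.1f} kbit/s")
print(f"Compressed video: ~{video_bitrate / 1000:.0f} kbit/s "
      f"(roughly {video_bitrate / fap_bitrate:.0f}x more)")
```

Even with generous assumptions for the animation stream, rendered-and-compressed video of the same scene costs on the order of a hundred times more bits, which is the gap the FBA coding tools are designed to exploit.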
Special CAT Presentation: The Destruction of the Schwartzes
March 2004

Thursday, March 25th, 2004, 6 p.m.
719 Broadway, 12th Floor Large Conference Room

Please join us for a viewing of Part I of
The Destruction of the Schwartzes

The film presents the Holocaust memoirs of Jack Schwartz's cousin Tibor Schwartz, one of three survivors among about 30 relatives on Jack Schwartz's father's side. It covers the period from 1944 to 1946 and is one of the thousands of interviews taped by the Spielberg Shoah Foundation.
SIGGRAPH 2003 Electronic Theater Showing
February 2004

Monday, February 9th, 7:00-9:00 p.m.
NYU's Kimmel Center at 60 Washington Square South


On Monday, February 9th, from 7:00 PM to 9:00 PM, NYC ACM SIGGRAPH is very proud to present the SIGGRAPH 2003 Electronic Theater. This screening will be held at NYU's Kimmel Center at 60 Washington Square South in the Village.

The SIGGRAPH 2003 Electronic Theater is part of the world's most prestigious film and video extravaganza, showcasing dazzling and innovative imagery in invited and submitted works selected by a distinguished jury of computer graphics experts and specialists. As part of the Computer Animation Festival, the SIGGRAPH 2003 Electronic Theater is internationally recognized and lauded as an event that serves to engage and inspire artists, scientists, engineers, designers, and students to harness the power of the digital image to explore the boundlessness of imagination.

NYC ACM SIGGRAPH is very proud to have SIGGRAPH 2004 Computer Animation Festival (CAF) Chair Chris Bregler of NYU as our guest host for the evening. Chris will also be sharing some insights into how the CAF works. You can find out more information about this year's conference, SIGGRAPH 2004, at www.siggraph.org/s2004.

As always, this event is free for all NYC ACM SIGGRAPH members. It is also free for NYU faculty, staff and students. It is $7.00 for non-members and $3.00 for non-member students (with a current, valid student ID). NYC ACM SIGGRAPH wishes to thank NYU for its generous support of this event. More information about the presentation and directions to the Kimmel Center can be found on our web site at nyc.siggraph.org.
Special CAT Colloquium: Shelley Page
February 2004

Friday, February 13th, 2 p.m.
719 Broadway, 12th Floor Large Conference Room

Ms. Page will be speaking about Shrek 2 and other new shows from DreamWorks. She will also be showing highlights from Imagina and other work from around Europe, including short films and commercials, as well as some of the particle work shown at Imagina by Bruce Glass.
Special CAT Colloquium
February 2004

Wednesday, February 25th, 2004, 2:30-3:30 p.m.
719 Broadway, 12th Floor Large Conference Room

Serge J. Belongie
Department of Computer Science and Engineering
University of California, San Diego

Lecture: Three Brown Mice: See How They Run -- Monitoring Rodent Behavior in the Smart Vivarium

We address the problem of tracking multiple, identical, nonrigid moving targets through occlusion for purposes of rodent surveillance from a side view. Automated behavior analysis of individual mice promises to improve animal care and data collection in medical research. In our experiments, we consider the case of three brown mice that repeatedly occlude one another and have no stable trackable features. Our proposed algorithm computes and incorporates a hint of the future location of the target into layer-based affine optical flow estimation. The hint is based on the estimated correspondences between mice in different frames derived from a depth ordering heuristic. Our approach is simple, efficient, and does not require a manually constructed mouse template. We demonstrate encouraging results on a challenging test sequence containing multiple instances of severe occlusion. (This is joint work with Kristin Branson and Vincent Rabaud.)
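As a generic illustration of how such a hint can enter the flow estimate (a simplified stand-in, not the authors' code), the sketch below solves one linearized least-squares step for a layer's affine motion with a quadratic penalty that pulls the layer's translation toward a hinted displacement. The function name and interface are hypothetical.

```python
import numpy as np

def affine_flow_with_hint(Ix, Iy, It, coords, hint_shift, lam=1.0):
    """One linearized least-squares step of affine flow for a layer, with a
    quadratic penalty pulling the translation toward a hinted displacement.

    Ix, Iy, It : spatial gradients and temporal difference at the layer's pixels (1-D arrays)
    coords     : (N, 2) array of the layer's pixel coordinates (x, y)
    hint_shift : hinted translation (dx, dy), e.g. from a predicted future location
    lam        : weight of the hint penalty
    """
    x, y = coords[:, 0], coords[:, 1]
    # Affine flow model: u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=1)
    b = -It
    AtA, Atb = A.T @ A, A.T @ b          # brightness-constancy normal equations
    P = np.zeros((6, 6))
    P[0, 0] = P[3, 3] = lam              # penalize only the translation terms a0, a3
    q = np.zeros(6)
    q[0], q[3] = lam * np.asarray(hint_shift, dtype=float)
    return np.linalg.solve(AtA + P, Atb + q)   # [a0, a1, a2, a3, a4, a5]
```

In the system described above, the hint would come from the estimated correspondences and depth ordering between mice across frames, and larger values of lam express more confidence in that prediction.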

Email
For additional information, contact: info@cat.nyu.edu

