NYU CAT
Events Archive

Tuesday December 3rd, 2002 3:00 p.m.
719 Broadway 12th floor
Small Conference Room

Niall Winters is a member of the Everyday Learning Group at Media Lab Europe in Dublin, Ireland. His research currently focuses on (a) designing learning environments that let students and children explore abstract concepts (object-oriented data structures, multivariable systems, etc.) and (b) developing "core" technologies driven by human requirements. www.mle.media.mit.edu/~niall

* * *
Wednesday December 4th, 2002 11:00 a.m.
719 Broadway 12th floor
Small Conference Room

In this talk I will present two practical techniques for simulating subsurface scattering and for rendering translucent materials. The first technique uses photon tracing and photon mapping to simulate both single and multiple scattering inside the material. The photon mapping algorithm is significantly faster than other Monte Carlo based methods, but it becomes costly for highly scattering materials such as milk and skin. This observation led to the development of a new technique based on a diffusion approximation. The diffusion approximation is faster (by several orders of magnitude) than previous approaches for rendering translucent materials, and it is the first theory in computer graphics that extends the traditional point-based reflection model (BRDF) to a BSSRDF (Bidirectional Scattering Surface Reflectance Distribution Function). In addition, the theory is sufficiently accurate that it can be used to measure the scattering properties of translucent materials. I will show several rendered animations and images of translucent materials, including marble, milk, and skin, simulated using these techniques.
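The diffusion approximation described in the abstract has a well-known closed form: the classical dipole model places a positive source just beneath the surface and a mirrored negative source above it. A minimal sketch of the resulting diffuse reflectance profile R_d(r) follows; the scattering coefficients below are merely marble-like illustrative values, not exact measured data.

```python
import math

def diffuse_reflectance(r, sigma_s_prime, sigma_a, eta=1.3):
    """Dipole diffusion approximation to the BSSRDF's diffuse term R_d(r).
    r is the distance between the points where light enters and exits."""
    sigma_t_prime = sigma_s_prime + sigma_a              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coeff.

    # Fresnel-based boundary condition for a smooth dielectric surface.
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)

    z_r = 1.0 / sigma_t_prime            # depth of the positive (real) source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)    # depth of the negative (virtual) source
    d_r = math.sqrt(r * r + z_r * z_r)   # distance to the real source
    d_v = math.sqrt(r * r + z_v * z_v)   # distance to the virtual source

    return alpha_prime / (4.0 * math.pi) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3
    )

# Light exiting far from the entry point is dimmer than light exiting nearby,
# which is exactly the soft falloff that makes translucent materials look right.
near = diffuse_reflectance(0.5, sigma_s_prime=2.6, sigma_a=0.0041)
far = diffuse_reflectance(5.0, sigma_s_prime=2.6, sigma_a=0.0041)
```

The two exponential terms correspond to the real and virtual dipole sources; their difference in depth is what encodes the boundary condition at the surface.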

Bio:
Dr. Henrik Wann Jensen is an assistant professor at UCSD where he is working in the computer graphics group on realistic image synthesis, global illumination, and appearance modeling. His contributions to computer graphics include the photon mapping algorithm for global illumination, and the first BSSRDF for simulating subsurface scattering in translucent materials. He is the author of "Realistic Image Synthesis using Photon Mapping", AK Peters 2001. Prior to coming to UCSD in 2002, he was a research associate at Stanford from 1999-2002, a postdoctoral researcher at MIT, and a research scientist in industry working on commercial rendering software. He received his M.Sc. and Ph.D. in Computer Science from the Technical University of Denmark for developing the photon mapping method.
http://graphics.ucsd.edu/~henrik

* * *
Friday December 6th, 2002 11:30 a.m.
Room 1302
WWH 251 Mercer Street
New York, NY 10012-1185

Speaker: Thomas Funkhouser, Princeton University
Title: A Search Engine for 3D Models

Abstract: As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them (e.g., a Google for 3D models). Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this talk, we investigate new shape-based search methods. A key challenge is to find a computational representation of shape (a "shape descriptor") that is concise, robust, quick to compute, efficient to match, and discriminating between similar and dissimilar shapes.

In this talk, I will describe shape descriptors designed for computer graphics models commonly found on the Web (i.e., they may contain arbitrary degeneracies and alignments). We have experimented with them in a Web-based search engine that allows users to query for 3D models based on similarities to 3D sketches, 3D models, 2D sketches, and/or text keywords. We find that our best shape-matching methods provide better precision-recall performance than related approaches and are fast enough to return query results from a repository of 20,000 polygonal models in under a second. You can try them out at: http://shape.cs.princeton.edu.
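The abstract does not name a specific descriptor, but one simple rotation- and translation-invariant descriptor from the same research group is the D2 shape distribution (Osada et al.): a histogram of distances between random point pairs on the model. A minimal sketch over point clouds, with illustrative function names and bin counts:

```python
import math
import random

def d2_descriptor(points, n_pairs=2000, n_bins=16, rng=None):
    """D2 shape distribution: a normalized histogram of distances between
    random point pairs. Invariant to rotation and translation; normalizing
    by the maximum sampled distance adds rough scale invariance."""
    rng = rng or random.Random(0)
    dists = []
    for _ in range(n_pairs):
        p, q = rng.sample(points, 2)
        dists.append(math.dist(p, q))
    d_max = max(dists)
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(n_bins * d / d_max), n_bins - 1)] += 1
    return [h / n_pairs for h in hist]

def descriptor_distance(h1, h2):
    """L1 distance between two descriptors: small means similar shapes."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

Because only pairwise distances enter the histogram, rotating or translating a model leaves its descriptor essentially unchanged, which is exactly the robustness a Web-scale 3D search engine needs when models arrive in arbitrary orientations.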

Refreshments will be served at 11:15 a.m. in room 1302 Warren Weaver Hall.

* * *
Thursday December 12th, 2002 3:00 p.m.
Room 109
WWH 251 Mercer Street
New York, NY 10012-1185

Demetri Terzopoulos Lectures on "Artificial Life"

The CIMS holiday party will follow at 4:30 p.m. that afternoon in the 13th floor lounge of Warren Weaver Hall.

* * *
Tuesday, December 17, 3pm
719 Broadway, 12th Floor Small Conference Room

PerAct: An Action Perception System for the Talking Heads Experiment
Jean-Christophe Baillie
l'Ecole Nationale Superieure de Techniques Avancees
Paris, France

Abstract: PerAct is a fully integrated system that performs real-time action recognition in video sequences. The recognized actions are simple ones such as "Take", "Push", and "Pull". The vision part of the system uses probabilistic histograms to learn and track colored objects. Qualitative Descriptors are then used to recognize simple dynamic states between the objects (move, get closer, touch, ...). The Qualitative Descriptors are combined in real time into more abstract structures until actions can be recognized. The actions serve as a high-level information source for the latest Talking Heads experiment, led by Luc Steels at Sony CSL. This experiment uses verbal interaction between robots looking at a scene to let them dynamically develop their own grammatical structures.
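As a rough illustration of the idea (the thresholds, labels, and descriptor set below are invented for the sketch, not PerAct's actual ones), qualitative descriptors can be derived from the frame-to-frame distance between two tracked objects and then compressed into a symbolic sequence suitable for matching against action templates:

```python
def qualitative_descriptor(d_prev, d_curr, touch_eps=2.0, move_eps=0.5):
    """Map the change in distance between two tracked objects to a
    qualitative relation. Thresholds and labels are illustrative only."""
    if d_curr <= touch_eps:
        return "touch"
    if d_curr < d_prev - move_eps:
        return "get_closer"
    if d_curr > d_prev + move_eps:
        return "move_apart"
    return "static"

def describe_track(distances):
    """Turn a per-frame sequence of inter-object distances into a
    run-length-compressed sequence of qualitative states."""
    states = [qualitative_descriptor(a, b)
              for a, b in zip(distances, distances[1:])]
    compressed = [states[0]]
    for s in states[1:]:
        if s != compressed[-1]:
            compressed.append(s)
    return compressed

# A hand approaching an object, touching it, and moving away yields
# ["get_closer", "touch", "move_apart"], a pattern that a higher layer
# could match against a template for an action like "Take".
```

The point of the compression step is that the higher-level action recognizer sees a short symbolic sequence rather than raw per-frame geometry.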

Jean-Christophe Baillie will also present his future research program at ENSTA which includes "Architecture for Active Vision" and "Image Synthesis".

* * *
Courant Institute / NYU School of Medicine /
NYU College of Dentistry
December 18, 2002
2:50 Terzopoulos, 3:10 Zorin
Room 1302, Warren Weaver Hall (251 Mercer Street)

Demetri Terzopoulos and Denis Zorin will deliver lectures on Computer Graphics and Vision for Biomedical Applications.

* * *
Friday November 1, 2002 11:30 a.m.
Room 1302
WWH 251 Mercer Street
New York, NY 10012-1185

Speaker: Tomaso Poggio, Center for Biological and Computational Learning, Artificial Intelligence Laboratory and McGovern Institute for Brain Research, M.I.T.

Title: Statistical Learning: Overview and Applications

Abstract:
I will give a brief overview of our recent work on statistical learning theory, including results on the problem of classification and function approximation. I will describe applications in various domains -- such as visual recognition, computer graphics and bioinformatics.
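As one concrete instance of the function-approximation problem (a sketch; the abstract does not commit to a particular algorithm), regularized least squares with a Gaussian kernel, one of the learning schemes studied at CBCL, fits a smooth function from noisy samples:

```python
import numpy as np

def rls_fit(X, y, gamma=1.0, lam=1e-3):
    """Kernel regularized least squares: solve (K + lam*n*I) c = y
    for a Gaussian kernel K over the n training points in X."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def rls_predict(X_train, c, X_new, gamma=1.0):
    """Evaluate the fitted function at new points: a weighted sum of
    kernels centered on the training points."""
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq) @ c

# Function approximation demo: recover sin on [0, 3] from 30 noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)
c = rls_fit(X, y)
pred = rls_predict(X, c, np.array([[1.5]]))
```

The regularization parameter `lam` trades data fit against smoothness; classification fits the same template with labels in place of function values.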

Relevant papers can be downloaded from http://www.ai.mit.edu/projects/cbcl/publications/all-year.html.

Refreshments will be served at 11:15 a.m. in room 1302 Warren Weaver Hall.
* * *

Monday November 4th, 2002
2:00 p.m.
The Center for Advanced Technology
715 Broadway 12th floor
Small Conference Room

* * *

Thursday, November 14th, 2002
5:00 p.m.
Irving H. Jurow Lecture Hall
100 Washington Square East

Ken Perlin, Professor of Computer Science at the Courant Institute of Mathematical Sciences and Faculty of Arts and Science, will lecture on
More Than Words Can Say: New Modes of Communication for Networked Citizens
Reception to follow

* * *

Thursday, November 21st, 2002
3:00 p.m.
719 Broadway, 12th Floor Small Conference Room

Synthesizing Believable Facial Motion
Erika Chuang, Stanford University

Animation of facial speech and expressions has received increased attention in the graphics community recently. Most current research focuses on techniques for capturing, synthesizing, and retargeting facial motion. Little attention has been paid to the problem of controlling and modifying the expression itself.

In this talk, I will describe a technique based on a factorization model that separates video data of expressive facial speech into expressive features and underlying speech content. This allows, for example, a sequence originally recorded with a happy expression to be modified so that the speaker appears to be speaking with an angry or neutral expression. Although the expression has been modified, the new sequences maintain the same visual speech content as the original sequence. I will also discuss the main limitation of this model, the lack of temporal coherency. Finally, I will draw an analogy to recent work on texture synthesis as motivation for future directions.
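The abstract does not give the details of the factorization, but one standard way to separate "style" (expression) from "content" (speech) is the asymmetric bilinear model of Tenenbaum and Freeman, which can be fit with a single SVD. A minimal sketch, where the matrix layout and variable names are assumptions for illustration:

```python
import numpy as np

def fit_asymmetric_bilinear(Y, J):
    """Asymmetric bilinear model: each observation is y_{sc} = A_s @ b_c,
    a style-specific basis A_s applied to a style-independent content
    vector b_c. Y stacks the d-dimensional observation for style s and
    content c into an (S*d, C) matrix. Returns the stacked style bases
    (S*d, J) and the shared content vectors (J, C)."""
    U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
    A = U[:, :J] * sv[:J]   # stacked style bases, absorbing singular values
    B = Vt[:J, :]           # content vectors, shared across all styles
    return A, B

# Expression retargeting: approximate content c in style s as
# A[s*d:(s+1)*d] @ B[:, c]. Swapping in another style's block of A changes
# the expression while the content vector, hence the speech, stays fixed.
```

The key property is that the content vectors B are shared across styles, so resynthesizing with a different style block changes the apparent expression without altering what is being said.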

* * *

Thursday, November 21st, 2002
7:00 p.m.
Tisch Hall
Room 200

John SanGiovanni, Microsoft Technical Evangelist, will discuss current wireless technologies and give an overview of several Microsoft mobile platforms, including Tablet PC, Pocket PC, and SmartPhone. John will be followed by Brian Schneider from Microsoft College Recruiting, who will talk about technical full-time and internship opportunities working on products as developers, testers, and program managers.

There will be time for Q & A for both John and Brian after the presentation.

* * *

Friday November 22nd, 2002
3:00 p.m.
719 Broadway 12th Floor
Small Conference Room

Animation by Example
Michael Gleicher,
University of Wisconsin - Madison

Motion for computer animation is notoriously difficult to create. To achieve the expressiveness, subtlety, and realism of quality motion, practitioners have relied either on capturing the movements of real performers or on labor- and skill-intensive manual specification methods. Such methods create specific, short clips of motion. These clips may provide the desired quality, but they lack the flexibility required when not all movements can be pre-planned. In contrast to clip-based methods, motion synthesis approaches can flexibly create motions on the fly, but (to date) have not provided sufficient quality.

In this talk, I will survey our efforts to create high-quality motion for animation in a flexible manner. I will begin by reviewing some of our previous efforts in motion editing, the problem of adapting motions to meet new needs. I will discuss how the successes and failures of these approaches have led us to a number of new directions. I will describe several of our recent results, including preserving the fine details of motions during editing, creating high-level control abstractions for motion, and synthesizing new motions by assembling pieces of existing motions. Combined, these developments promise to allow flexible creation of high-quality motion based on an initial set of example motions.
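The last idea, assembling new motions from pieces of existing ones, can be sketched in miniature. The pose representation and blend length below are placeholders; real systems compare joint angles and velocities rather than bare pose vectors:

```python
import numpy as np

def find_transition(clip_a, clip_b):
    """Find the frame pair (i, j) where clip_a frame i is most similar to
    clip_b frame j: the cheapest place to splice the two clips, in the
    spirit of motion-graph construction. Each frame is a pose vector."""
    costs = np.linalg.norm(clip_a[:, None, :] - clip_b[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(costs), costs.shape)
    return i, j

def splice(clip_a, clip_b, i, j, blend=4):
    """Play clip_a up to frame i, cross-fade over `blend` frames into
    clip_b starting at frame j, then continue with clip_b."""
    out = [clip_a[:i]]
    for k in range(blend):
        w = (k + 1) / (blend + 1)
        ia = min(i + k, len(clip_a) - 1)
        jb = min(j + k, len(clip_b) - 1)
        out.append(((1 - w) * clip_a[ia] + w * clip_b[jb])[None, :])
    out.append(clip_b[j + blend:])
    return np.concatenate(out)
```

Choosing the transition where the pose distance is smallest is what keeps the spliced motion from popping; the cross-fade hides whatever small discrepancy remains.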

BIO: Michael Gleicher is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin, Madison. Prof. Gleicher joined the University in 1998 to start a computer graphics group within the department. The overall goal of his research is to create tools that make it easier to create pictures, video, animation, and virtual environments; and to make these visual artifacts more interesting, entertaining, and informative. His current focus is on tools for character animation and for the automatic production of video.

Prior to joining the university, Prof. Gleicher was a researcher at The Autodesk Vision Technology Center and at Apple Computer's Advanced Technology Group. He earned his Ph.D. in Computer Science from Carnegie Mellon University and holds a B.S.E. in Electrical Engineering from Duke University.

* * *
Thursday October 31st, 2002
10:00 a.m.
The Tribeca Grand Hotel
Two Avenue of the Americas
New York, NY 10013

Ken Perlin is giving the Keynote Speech and Opening Remarks on the second day of the Entertainment Technology Alliance's Tribeca Seminar Series.

* * *
For additional information, contact: info@cat.nyu.edu

