
Planar Manipulator Display

Using physical objects as bi-directional user interface elements

Dan Rosenfeld, Michael Zawadzki, Jeremi Sudol, Ken Perlin
NYU Media Research Lab

A user selecting a layout method on the Planar Manipulator Display

The display reconfigures objects on the table based on user input



Videos: Bird's eye view (3 MB mpeg) | Group interaction (5 MB mpeg) | System overview (25 MB mpeg)



Background
We have become so habituated to standard computer interfaces – a monitor, a mouse and a keyboard – that it is difficult to notice their shortcomings. Perceptual faculties like spatial awareness and physical intuition are not well served by the typical input/output methods that desktop computers offer. 3D geometry, for example, is more easily understood when real physical objects are used, rather than merely represented on a flat monitor screen.

We think that a more effective and pleasant means of human-computer interaction can be achieved by better employing these perceptual abilities. To this end, we have built the Planar Manipulator Display (PMD), a novel input/output device that can sense the movement of multiple physical objects on a table surface while simultaneously controlling their motion. Our work extends research on physical objects as input elements by providing a practical method for using them as bi-directional (input-output) elements.


Implementation

We are particularly interested in scenarios involving many elements, where physical presentation could be significantly more comprehensible than a screen-based representation. For this reason, we require a design that scales well to large numbers of objects. The architecture we developed combines inexpensive, ‘dumb’ mobile platforms, high-speed sensing, and centralized computation; a minimal sketch of this division of labor appears after the list below. Since very little computation is performed on each platform, additional platforms can be added to the display at nominal cost. In addition to these small platforms, the system includes:
• an opto-electronic, position sensing subsystem,
• a ‘table controller’ to manage communication and position computation,
• a standard PC on which control systems and PMD applications run.
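
The sketch below illustrates, under stated assumptions, how little a single platform needs to do in this division of labor: it pulses its two infrared LEDs during its assigned time slice so the table can locate it, and it latches whatever wheel speeds the table controller last sent. The class and method names, and the two-wheel drive assumption, are illustrative and not taken from the PMD hardware.

```python
# Hypothetical sketch of the platform-side logic implied by the 'dumb
# platform' design; names and the two-wheel assumption are illustrative.
class DumbPlatform:
    def __init__(self, platform_id: int):
        self.id = platform_id
        self.left_speed = 0.0   # last commanded wheel speeds
        self.right_speed = 0.0

    def on_time_slice(self) -> None:
        # During this platform's slice of the update cycle, pulse each
        # on-board infrared LED in turn so the table's sensor can image it.
        self.pulse_led(0)
        self.pulse_led(1)

    def on_command(self, left_speed: float, right_speed: float) -> None:
        # No pose estimation or planning on board: simply set each motor
        # to the speed commanded by the table controller.
        self.left_speed = left_speed
        self.right_speed = right_speed

    def pulse_led(self, index: int) -> None:
        # Placeholder for the hardware call that drives the IR LED.
        pass
```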

Position sensing and communications (from the table to the platforms) are run in a round-robin manner. Within each update cycle, each platform’s current location and orientation are determined, control systems are run, and new motor velocity commands are transmitted to platforms. The platform simply sets the speed of each motor as commanded.
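
Concretely, one update cycle might look like the loop below. This is a minimal sketch assuming hypothetical helpers sense_pose(), run_controller(), and send_velocities(); the real sensing, control systems, and communication are implemented in the table controller hardware and on the PC, not as Python functions.

```python
# Minimal sketch of one round-robin update cycle; all function and
# parameter names are assumptions, not the PMD's actual interfaces.
def update_cycle(platform_ids, targets, sense_pose, run_controller,
                 send_velocities):
    for pid in platform_ids:
        # 1. During this platform's time slice, determine its current
        #    location and orientation on the table.
        pose = sense_pose(pid)                      # (x, y, heading)
        # 2. Run the control system against the application's target pose.
        left, right = run_controller(pose, targets[pid])
        # 3. Transmit new motor velocity commands; the platform simply
        #    sets the speed of each motor as commanded.
        send_velocities(pid, left, right)

# The cycle repeats for as long as the application runs:
#     while running:
#         update_cycle(platform_ids, targets, sense_pose,
#                      run_controller, send_velocities)
```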

Position Sensing
Each platform has two infrared LEDs that are pulsed sequentially during its time-slice in the update cycle. The LEDs’ outputs are imaged onto a position-sensing device (PSD) via a lens, generating position-dependent currents. These currents are amplified, sampled by the table controller, and used to compute a position for each LED.
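
The computation from photocurrents to platform pose can be sketched as follows. The normalized-current expression is the standard relation for a duo-lateral 2D PSD; the PMD's actual detector geometry, calibration, and mapping from the image plane to the table plane are not detailed here, so treat every constant and name below as a placeholder.

```python
import math

def psd_position(i_x1, i_x2, i_y1, i_y2, half_size=1.0):
    """Light-spot position on the detector from four electrode photocurrents
    (duo-lateral PSD relation); half_size stands in for the active area's
    half-length and any calibration scale."""
    x = half_size * (i_x2 - i_x1) / (i_x2 + i_x1)
    y = half_size * (i_y2 - i_y1) / (i_y2 + i_y1)
    return x, y

def platform_pose(led_a, led_b):
    """Platform position and orientation from its two LED positions,
    assuming both have already been mapped from detector coordinates to
    table coordinates and that led_a is the 'front' LED."""
    (xa, ya), (xb, yb) = led_a, led_b
    center = ((xa + xb) / 2.0, (ya + yb) / 2.0)
    heading = math.atan2(ya - yb, xa - xb)   # radians, in the table frame
    return center, heading
```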

System architecture diagram

Applications
We have begun developing test applications to illuminate technical issues and to explore the kinds of interaction the device facilitates. One of these is a design support system for interior architecture. With this application, users can arrange and view furniture in an interior space according to preferred layout methods. Each user-selectable layout method defines a set of soft constraints, which are employed in computing a new configuration. The participant chooses among layout methods by moving a ‘selector puck’ to the desired menu area. If the user then moves a piece of furniture, setting it to a desired location in the space, the system holds the object in that position while a new configuration is computed. It then moves the remaining furniture into the new configuration to satisfy the constraints imposed by the selected layout method. For example, the participant might switch from the “Favor Light” option, in which furniture placement is biased towards windows, to the “Away from Center” option, and see the furniture moved to maximize the amount of contiguous open space.
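
As a rough illustration of how such a layout method could be treated computationally, the sketch below encodes two hypothetical methods as soft cost terms and nudges the free furniture by simple hill climbing while the user-moved piece is held fixed. The cost terms, weights, room dimensions, and solver are illustrative assumptions, not the PMD application's actual implementation.

```python
import math
import random

ROOM_W, ROOM_H = 10.0, 8.0   # hypothetical room; windows along the y = ROOM_H wall

def cost(positions, method):
    """Soft-constraint cost of a candidate layout under the chosen method."""
    total = 0.0
    for x, y in positions.values():
        if method == "Favor Light":
            total += (ROOM_H - y) ** 2                      # pull toward the window wall
        elif method == "Away from Center":
            d = math.hypot(x - ROOM_W / 2, y - ROOM_H / 2)
            total += max(0.0, 3.0 - d) ** 2                 # keep the center clear
    names = list(positions)
    for i, a in enumerate(names):                           # soft separation between pieces
        for b in names[i + 1:]:
            d = math.dist(positions[a], positions[b])
            total += 10.0 * max(0.0, 1.0 - d) ** 2
    return total

def relayout(positions, held, method, iters=2000, step=0.2):
    """Hold the user-moved piece fixed; hill-climb the rest toward lower cost."""
    best = dict(positions)
    for _ in range(iters):
        name = random.choice([n for n in best if n != held])
        x, y = best[name]
        trial = dict(best)
        trial[name] = (min(ROOM_W, max(0.0, x + random.uniform(-step, step))),
                       min(ROOM_H, max(0.0, y + random.uniform(-step, step))))
        if cost(trial, method) < cost(best, method):
            best = trial
    return best

# Example: the user places the sofa by hand, then selects "Favor Light".
furniture = {"sofa": (2.0, 2.0), "table": (5.0, 4.0), "lamp": (8.0, 1.0)}
new_layout = relayout(furniture, held="sofa", method="Favor Light")
```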

