Chicago, Ill. - SIGGRAPH 2010's Emerging Technologies program presents innovations across a broad range of applications, including displays, robotics, input interfaces, vision technologies, and interactive techniques. Combining technologies chosen by the organizers with works selected by a jury of experts, the 22 selections were drawn from more than 107 international submissions and will be on display, available for hands-on interaction, in Los Angeles this summer.
"With every passing year, the technologies presented at SIGGRAPH become more and more astonishing," says Preston J. Smith, SIGGRAPH 2010 Emerging Technologies chair from Laureate Institute for Brain Research. "This year is no different as conference attendees will experience first-hand the latest achievements across science, commercial, and research fields. In some instances, these technologies are making their first public appearance and are coming to SIGGRAPH directly from research labs."
The following projects will be shown in the SIGGRAPH 2010 Emerging Technologies area:
360-Degree Autostereoscopic Display
The volumetric 3D display has been a motif in many science-fiction movies and is the very image of futuristic technology. This prototype 360-degree autostereoscopic display allows views of full-color volumetric objects from all angles, as if the objects really existed. It uses special LED light sources to show 360 unique images in all directions, at one-degree separations. Viewers sense the depth of the displayed object because their left and right eyes see different images; no special 3D glasses are needed.
The 360-degree display has a digital-video input port for connection to computers or other devices. When video data is supplied to the display, moving volumetric objects appear inside the cylinder. When 360-degree CG movies are generated by a graphics processor in real time and supplied to the display, the user can move and interact with the volumetric object. The display is also equipped with a gesture sensor that can interactively control the orientation of the object in response to the user's hand motions.
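The per-degree view selection described above is simple to sketch. Below is a minimal, hypothetical Python illustration of how a viewer's azimuth, plus a gesture-driven rotation offset, could pick which of the 360 one-degree views each eye receives; the indexing scheme and the 3-degree angular eye separation are assumptions, not Sony's implementation:

```python
NUM_VIEWS = 360  # one pre-rendered image per degree, as described above

def view_index(viewer_azimuth_deg, object_yaw_deg):
    # The gesture sensor rotates the object; subtracting its yaw picks
    # the view the display should present in this direction.
    return int(round(viewer_azimuth_deg - object_yaw_deg)) % NUM_VIEWS

def stereo_views(viewer_azimuth_deg, object_yaw_deg, eye_separation_deg=3.0):
    # The two eyes look at the cylinder from slightly different azimuths,
    # so they receive different one-degree views: the depth cue that
    # makes 3D glasses unnecessary.
    half = eye_separation_deg / 2.0
    return (view_index(viewer_azimuth_deg - half, object_yaw_deg),
            view_index(viewer_azimuth_deg + half, object_yaw_deg))

# A viewer at 90 degrees, object rotated 10 degrees by a hand gesture:
print(stereo_views(90.0, 10.0))  # (78, 82) with the assumed separation
```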
This system is the first volumetric 3D display device that features a high-quality 3D image (360 view points), 24-bit full color, a compact size, and interactive live motion with a digital-video interface. It has many potential applications, such as amusement, professional visualization, digital signage, museum display, video games, and futuristic 3D telecommunication.
Contributor
Hiroki Kikuchi
Katsuhisa Itou
Hisao Sakurai
Izushi Kobayashi
Hiroaki Yasunaga
Kazutatsu Tokuyama
Hirotaka Ishikawa
Hidenori Mori
Kengo Hayasaka
Hiroyuki Yanagisawa
Sony Corporation
3D Multitouch: When Tactile Tables Meet Immersive Visualisation Technologies
This demonstration merges intuitive 2D collaboration with 3D display techniques such as viewpoint tracking and stereoscopic rendering. While efficient hardware systems and software algorithms have been well-identified for each technology, combining them raises totally new issues. For instance, each stereo viewing angle is unique, stereo rendering is usually single-viewpoint only, and the focal plane and stereo parameters must be precisely controlled to avoid collision between fingers and virtual objects.
In 3D Multitouch's application to city planning, two users share interactions in the 3D city, but each has a unique view of the content, just as they would on a real mockup. Additionally, the system controls stereo parallax by detecting the users' hands and fingers, which allows the most immersive negative parallax when no arm occludes the content. The system switches to positive parallax when users' hands come closer.
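A minimal sketch of that parallax-switching logic, reduced to a single hand-to-screen distance with a hypothetical threshold (the real system tracks hands and fingers and precisely controls the focal plane and stereo parameters):

```python
def choose_parallax(hand_distance_m, threshold_m=0.15):
    """Pick the stereo mode from tracked hand/finger positions: negative
    parallax (content floating in front of the screen) is the most
    immersive, but it collides visually with a real arm crossing it, so
    the system falls back to positive parallax (content behind the
    screen plane) when a hand approaches. The 15 cm threshold is a
    hypothetical value."""
    return "negative" if hand_distance_m > threshold_m else "positive"
```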
This unique two-user, stereoscopic multitouch system focuses on the issues and constraints that will most certainly motivate such research work in the future. The goal of the demonstration is to let attendees experience the system and the new problems it introduces, and discuss its preliminary solutions.
Contributor
Jean-Baptiste de la Rivière
Cédric Kervégant
Nicolas Dittlo
Mathieu Courtois
Emmanuel Orvain
Immersion SAS
A Fluid-Suspension, Electromagnetically Driven Eye With Video Capability for Animatronic Applications
This compact, fluid-suspension, electromagnetically gimbaled animatronic eye features low operating power, a range of motion and saccade speeds that can exceed those of the human eye, and an absence of frictional wear points. The design has no external moving parts, so it is easy to install in new and retrofit animatronic applications. It allows a clear view through the entire structure from front to back, making a rear, stationary video camera possible. The camera view is supported without a large entrance pupil and is stationary even during rotation of the eye. Two of these devices can support stereo viewing while sharing the same electrical drive signal for objects at infinity. Alternatively, the eyes may be “toed-in” by offset drive signals derived from object-distance data.
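The toe-in offset reduces to simple trigonometry on the object distance. A minimal sketch, assuming a typical 65 mm eye baseline (the authors' actual drive-signal derivation is not specified):

```python
import math

def toe_in_angle_deg(object_distance_m, eye_baseline_m=0.065):
    """Per-eye vergence ('toe-in') angle so both gaze lines meet at the
    object. The 65 mm baseline is a typical human value, an assumption."""
    if math.isinf(object_distance_m):
        return 0.0  # objects at infinity: both eyes share one drive signal
    return math.degrees(math.atan2(eye_baseline_m / 2.0, object_distance_m))

print(toe_in_angle_deg(0.5))           # ~3.7 degrees at 50 cm
print(toe_in_angle_deg(float("inf")))  # 0.0
```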
The eye comprises a transparent plastic inner sphere painted to look like the human eye, a clear index-matching liquid, and an outer transparent shell. The pupil area of the inner eye is clear to allow light to enter, and an area is left open at the back for light to reach the CCD of the attached camera. The inner eye is neutrally buoyant in the liquid, and because of spherical symmetry, even with the moving inner eye, the assembly forms a single spherical lens, which is the only lens used for the camera.
It is important to note that the outer surface of the eye does not move. The inner eye is magnified by the outer sphere and liquid so its surface appears to be at the outside of the outer sphere.
In a special application, the eye can be separated into a hermetically sealable portion that might be used as a human eye prosthesis, along with an extra-cranially mounted magnetic drive.
Contributor
Lanny Smoot
Disney Research
Katie Bassett
Yale University
Marcus Hammond
Stanford University
Acroban the Humanoid: Playful and Compliant Physical Child-Robot Interaction
Acroban is the first humanoid robot that can demonstrate playful, compliant, and intuitive physical interaction with children. At the same time, it can move and walk dynamically, keeping its equilibrium even when humans initiate unpredicted physical interactions.
This breakthrough was achieved by combining three crucial features:
1. Softness. The rigidity and elasticity of all articulations are controlled dynamically depending on the external forces applied to the robot (see the sketch after this list).
2. Morphology. The robot has a complex vertebral column and hips and ankles that allow it to keep its equilibrium through a large variety of external perturbations.
3. Motor and interactive primitives. Dynamical systems with stable and naturally drivable attractor dynamics, and a particular movement design that creates a strong illusion of life.
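A minimal sketch of the variable-stiffness idea in item 1: a virtual spring toward the target pose whose stiffness drops as external force grows. All constants are illustrative, not Acroban's actual control law:

```python
def joint_torque(theta, theta_target, external_torque,
                 k_base=4.0, softening=0.8):
    """One compliant joint: lower the spring stiffness as external force
    grows, so the robot yields to a human instead of fighting the
    interaction. Gains are illustrative assumptions."""
    k = k_base / (1.0 + softening * abs(external_torque))
    return k * (theta_target - theta)

# With no external load the joint pulls firmly toward the target; under
# a strong human-applied torque the same error produces a gentler pull.
print(joint_torque(0.0, 0.5, external_torque=0.0))  # 2.0
print(joint_torque(0.0, 0.5, external_torque=5.0))  # 0.4
```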
In this demonstration, the robot combines a range of behaviors that all react intuitively, naturally, and creatively to uncontrolled external human intervention. For example, when the robot is walking, a human can take its arms, like we take the arms of babies learning to walk, and drive the robot in any direction in a fluid and transparent manner. This is realized automatically, without providing the robot with any sort of command, and is the result of the dynamical properties of its motor primitives and its morphology. Also, when the robot is not walking and is displaying a complex movement of its torso, a human can physically interrupt the robot and take its arm; the robot's arm follows the human-imposed movements, and the robot keeps its balance even when the movement shifts its center of gravity.
Contributor
Olivier Ly
INRIA/LaBRI
Pierre-Yves Oudeyer
INRIA
AirTiles: Modular Devices to Create a Flexible Sensing Space
AirTiles is a novel modular device that allows users to create geometric shapes in the real world and add a flexible sensing space within the created shape. In this interactive audio/visual environment, users can freely manipulate and rotate the devices so that a geometric shape appears on the floor.
The compact device consists of a microprocessor, a laser-emitting module, infrared-emitting/receiving components, a small position-sensitive detector, a wireless meshed network component, an LED, a beep speaker, and a battery. The location of each AirTile and its laser beam correspond to the corner and side of the created shape.
AirTiles could be used in exercise routines, human-behavior measurement, and motion-guidance systems.
Contributor
Kazuki Iida
Junki Ikeuchi
Toshiaki Uchiyama
Kenji Suzuki
University of Tsukuba
An Interactive Zoetrope for Animation of Solid Figurines and Holographic Projections
Zoetropes, first developed in the early 1800s as parlor entertainment, inspired the method of successive image presentation used in modern cinema and television. Typically, they were spinning platters with "frames" of successive animation attached, and slits or mirrors that presented the images intermittently to create the illusion of motion.
In recent zoetropes, global, bright-light, LED strobes replace the slits, and 3D stereolithographic figurines replace the 2D images. These zoetropes strobe all figures simultaneously but are only capable of periodic, repetitive “shows”.
This project demonstrates new techniques for aperiodic, localized lighting to instantaneously vary the order in which images are displayed, so the course of animation can be changed in real time. This allows non-trivial, non-repetitive animation with a small number of frames. Others have explored audio synchronization of 2D animation, but this demonstration animates a character face based on audio input, which allows instantaneous interactivity with physical objects and holograms.
The demonstration includes three different systems:
1. Animation of whimsical faces drawn on ping-pong balls affixed to a small rotating platform, each face with increasingly open mouth positions and levels of facial expression. A focused LED strobes a specific zoetrope location, and custom timing electronics select the appropriate face to light at each platter revolution, based on spoken audio levels (see the sketch after this list).
2. Rotation of a rear-illuminated hologram disk with image frames stored in it. An analogue of the circuitry in the first system strobes the most appropriate figure as the hologram rotates.
3. Replacement of hologram rotation with lighting-specific LEDs at appropriate angles around the rear-illuminated hologram. This technique produces a floating holographic head that “mimics” a human speaking into a microphone.
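For the first system, the audio-driven timing can be sketched as follows; the face count and the envelope quantization are illustrative assumptions, not the actual custom electronics:

```python
NUM_FACES = 12  # mouth/expression poses around the platter (illustrative)

def face_for_level(audio_level):
    # Quantize a 0..1 spoken-audio envelope into one of the increasingly
    # open mouth positions.
    level = min(max(audio_level, 0.0), 1.0)
    return min(int(level * NUM_FACES), NUM_FACES - 1)

def strobe_delay_s(face_index, rev_period_s, rev_start_s, now_s):
    # Wait until the chosen face passes the viewing position, then flash
    # the focused LED at exactly that zoetrope location.
    phase = (now_s - rev_start_s) % rev_period_s
    target = face_index / NUM_FACES * rev_period_s
    return (target - phase) % rev_period_s
```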
Contributor
Lanny Smoot
Disney Research
Katie Bassett
Yale University
Daniel Burman
Stephen Hart
Anthony Romrell
Holorad Inc.
beacon 2+: Networked Socio-Musical Interaction
In this environment for socio-musical interaction, people can collaborate to generate sounds and play music with their feet. The new musical interface (beacon) produces laser beams that generate sounds when they contact an individual performer's foot. Users can change the pitch and length of the sound as they walk, dance, and step around the beacons to create and share a musical experience. Two beacons are connected via the internet, so other "performers" in a distant location can share the generated music simultaneously.
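A minimal, hypothetical sketch of one possible sound mapping, assuming the pitch is taken from the angle at which a foot interrupts the beacon's laser fan (the authors' actual mapping is not specified):

```python
def foot_to_note(beam_angle_deg, base_midi=48, span_semitones=24):
    """The interruption angle selects the pitch; holding the interruption
    longer could set the note length (not modeled here). All constants
    are illustrative assumptions."""
    fraction = (beam_angle_deg % 360.0) / 360.0
    return base_midi + int(fraction * span_semitones)

print(foot_to_note(180.0))  # 60: middle C when stepping opposite 0 degrees
```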
This novel interface could be used for physical exercise and other forms of recreation, and it provides a new mode of artistic expression for space designers.
Contributor
Takahiro Kamatani
Toshiaki Uchiyama
Kenji Suzuki
University of Tsukuba
Beyond the Surface: Supporting 3D Interactions for Tabletop Systems
Current tabletop systems are designed to sense 2D interactions on the tabletop surface, such as finger touches and tangible objects. Detection of activity above the tabletop surface would support 3D interactions. For example, an architect could examine a 2D blueprint of a building shown on the tabletop display while inspecting 3D views of the building by moving a mobile display above the tabletop.
This project demonstrates a new 3D tabletop system that combines an infrared (IR) projector with a regular color projector to simultaneously project visible content with invisible markers. Embedded IR cameras localize objects above the tabletop surface, and programmable marker patterns refine object location.
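One plausible way a device-mounted IR camera could localize itself from a projected invisible marker is a standard pose recovery from the marker's corners, sketched below with OpenCV's solvePnP. The 5 cm square marker layout is an assumption; the actual patterns are programmable:

```python
import cv2
import numpy as np

# Corners of one projected invisible marker in table coordinates
# (meters, counter-clockwise); size is an illustrative assumption.
MARKER = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0],
                   [0.05, 0.05, 0.0], [0.0, 0.05, 0.0]], dtype=np.float32)

def locate_device(corners_px, camera_matrix, dist_coeffs):
    """Recover the device camera's pose relative to the tabletop from
    the four detected marker corners (a 4x2 pixel array, in the same
    order as MARKER) -- one plausible way iView, iLamp, and iFlashlight
    could be localized above the surface."""
    ok, rvec, tvec = cv2.solvePnP(MARKER, corners_px.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```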
The demonstration shows three interaction metaphors. iView is a tablet computer with an attached IR camera, which becomes an intuitive tool to view 3D content from different perspectives. iLamp is a projector with an IR camera that projects high-resolution content on the surface, mimicking a desk lamp. iFlashlight is a mobile version of iLamp that facilitates information exploration and cooperative tasks.
Contributor
Liwei Chan
Hsiang-Tao Wu
Hui-Shan Kao
Home-Ru Lin
Ju-Chun Ko
Mike Y. Chen
Jane Hsu
Yi-Ping Hung
National Taiwan University
Colorful Touch Palette
Painting provides a rich tactile sensation, which we have gradually forgotten. This novel interactive painting interface may help us rediscover our creativity. Users can touch the display panel's electrodes, select or blend tactile textures of their choice, draw a line, paint, and experience the tactile sensations of painting. Various tactile textures can be created by mixing textures like paints.
Colorful Touch Palette is based on three innovations:
1. Providing various types of tactile sensation. Previous electro-tactile stimulation systems delivered only uniform textures and could not provide grating convex patterns with a resolution higher than the electrode interval. Colorful Touch Palette delivers various degrees of roughness by controlling the intensity of each electrode. Also, it virtually increases the spatial resolution by changing the stimulus points at a faster rate than the fingertip movements.
2. Using a blending method to create new tactile textures. A pressure model and a vibration model are combined to calculate the stimuli of the blended tactile textures (see the sketch after this list).
3. Providing tactile feedback according to the velocity and posture of the finger.
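A minimal sketch of the blending in item 2, assuming a simple linear mix of the two models' outputs over the electrode grid; the authors' actual combination of the pressure and vibration models is richer:

```python
import numpy as np

def electrode_intensities(pressure_texture, vibration_texture, mix):
    """Per-electrode stimulus intensity as a weighted mix of a
    pressure-model texture and a vibration-model texture (mix = 0 is
    pure pressure, 1 is pure vibration). The linear blend is an
    assumption for illustration."""
    blended = (1.0 - mix) * pressure_texture + mix * vibration_texture
    return np.clip(blended, 0.0, 1.0)  # normalized drive per electrode
```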
With this system, users can blend and create various textures, draw the textures on the canvas, and touch and feel the tactile sensation. The interface could be used to design complex spatial, tactile patterns for surface prototyping. It could also support innovations in artistic tactile painting.
Contributor
Yuki Hirobe
Shinobu Kuroki
Katsunari Sato
Takumi Yoshida
The University of Tokyo
Kouta Minamizawa
Susumu Tachi
Keio University
FuSA2 Touch Display
Touching, stroking, and pulling are important ways to communicate with fibratus (fiber-like) material. Stroking is especially distinctive, because it allows users to feel the material's direction, hardness, and thickness. FuSA2 Touch Display delivers those tactile sensations plus visual feedback. The visual display and multi-touch input detection are integrated into a system that uses plastic optical fiber (POF) bundles and a camera image, without additional sensors.
The system projects images onto the projector-side surface of the POF bundles and detects multi-touch input using the projected light itself. The projected light emerges from the fibratus surface. When users touch the surface, the light is diffusely reflected and re-enters the POFs, emerging on the camera-side surface at positions corresponding to the touched area. The camera captures this light, and the system recognizes the touch input.
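On the camera side, the detection step reduces to blob detection on the captured image. A minimal OpenCV sketch, with illustrative threshold and area values:

```python
import cv2

def detect_touches(camera_frame_gray, threshold=60, min_area=20):
    """Touched fibers return diffusely reflected light to the camera-side
    surface, so touches appear as bright blobs; the threshold and
    minimum blob area are illustrative values, not the authors'."""
    _, mask = cv2.threshold(camera_frame_gray, threshold, 255,
                            cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            touches.append((x + w // 2, y + h // 2))  # blob centers
    return touches
```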
The system is simple and delightful. When users touch or stroke the fibratus display, the touched areas change color. The colored area follows the stroke and fades away in time. Users receive tactile feedback from the fibratus material and visual feedback based on stroking speed and the touched area.
Contributor
Kosuke Nakajima
Yuichi Itoh
Ai Yoshida
Kazuki Takashima
Yoshifumi Kitamura
Fumio Kishino
Osaka University Graduate School of Information Science and Technology
Gesture-World Technology
Because fingers are articulated structures, they can assume many complex shapes, which often leads to the problem of self-occlusion. Despite their small size relative to the rest of the body, fingers are also capable of moving in a wide 3D space. For these reasons, it has not been easy to estimate hand poses by non-contact means with a monocular camera or a pair of cameras at close range. In recent years, however, high-speed cameras have become more compact and inexpensive.
Gesture-World Technology focuses on achieving highly accurate hand-pose estimation for unspecified users. It constructs an enormous database, covering bone thickness and length, joint range of motion, and habitual finger movements, by thoroughly reducing the dimensionality of the image features used for comparison with the input hand images. Preparing a database that includes differences among people becomes feasible only if the image features that express each hand pose are of extremely low dimensionality; this system reduces the dimensionality to 64 or less, about 1/25th of the original image features.
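A minimal sketch of this style of lookup, assuming a PCA-style projection to 64 dimensions followed by nearest-neighbour search; only the dimensionality figure comes from the text, and the authors' feature extraction and database construction are more sophisticated:

```python
import numpy as np

def build_index(features, dim=64):
    """Compress the database's image features to `dim` dimensions (the
    text cites 64 or less, about 1/25th of the original), so a database
    covering many users stays tractable."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:dim].T                    # top principal axes
    return mean, basis, centered @ basis  # low-dimensional codes

def nearest_pose(query_features, mean, basis, db_codes, db_poses):
    # Estimate the pose of one input hand image by nearest-neighbour
    # search in the reduced space.
    code = (query_features - mean) @ basis
    return db_poses[np.argmin(np.linalg.norm(db_codes - code, axis=1))]
```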
If Gesture-World Technology can achieve fast and accurate 3D hand-pose estimation, using only camera images (in other words, without sensors) and without the need to strictly fix the camera position, this technology could be applied in a wide range of areas (for example, gesture-based computer operation, virtual games, remote control without a remote controller, digital archiving of artisan skills, and remote robot control). The need to attach sensors or find and use special controllers will disappear.
Contributor
Kiyoshi Hoshino
Motomasa Tomida
Takanobu Tanimoto
University of Tsukuba
Haptic Canvas: Dilatant Fluid-Based Haptic Interaction
Haptic Canvas is a new haptic interaction that enables users to blend, draw, and feel fascinating and mysterious haptic sensations in a shallow pool of dilatant fluid (water and starch). The distinct haptic sensation comes from the fluid's "dilatancy", the change in state from liquid-like to solid-like according to the external force.
The system presents both direct touch and variable haptic sensations with dilatant fluid. A haptic glove mechanically controls the dilatancy. The glove's sucking, ejecting, and filtering functions jam the particles and cause changes in the state of the dilatant fluid. During hand movement, users perceive haptic sensations as shear forces between the precipitated particles at the bottom of the pool and the partially jammed particles.
Haptic Canvas also presents "stickiness", "hardness", and "roughness" sensations, the "haptic primary colors", according to actuation parameters such as the suction or ejection pressure and its duration. A new haptic sensation can be created when haptic primary colors are synthesized at varying rates. Users can blend haptic primary colors to create a fascinating or mysterious sensation by touching virtual haptic paints, then drawing and coloring a haptic picture on the canvas, like painting a picture.
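A minimal, hypothetical sketch of mixing the three haptic primary colors into glove actuation parameters; the specific weightings below are illustrative assumptions, not the authors' calibration:

```python
def mix_haptic_color(stickiness, hardness, roughness):
    """Blend the three 'haptic primary colors' (each 0..1) into glove
    actuation parameters. All mappings and constants are assumptions."""
    suction_pressure = 0.2 + 0.8 * hardness   # jam particles harder
    ejection_duty = 0.5 * (1.0 - stickiness)  # eject more to feel less sticky
    pulse_rate_hz = 2.0 + 18.0 * roughness    # faster pulsing feels rougher
    return suction_pressure, ejection_duty, pulse_rate_hz
```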
Haptic Canvas demonstrates that a dilatant-fluid-based haptic device expands the possibilities of haptic entertainment.
Contributor
Shunsuke Yoshimoto
Yuki Hamada
Takahiro Tokui
Tetsuya Suetake
Masataka Imura
Yoshihiro Kuroda
Osamu Oshiro
Bioimaging Laboratory, Osaka University
Head-Mounted Photometric Stereo for Performance Capture
Head-mounted cameras are important tools for capturing dynamic facial performances for video games and film, but it is still difficult to detect subtle facial motion, particularly around the eyes and mouth. This system enhances a head-mounted camera with LED-based photometric stereo. It provides dynamic surface-normal information that captures motion across the entire face. The resulting normals and geometry can be used directly or fed into machine-learning algorithms to control arbitrary facial rigs.
Contributor
Andrew Jones
Graham Fyffe
Xueming Yu
Alex Ma
Jay Busch
Mark Bolas
Paul Debevec
University of Southern California, Institute for Creative Technologies
In-Air Typing Interface for Mobile Devices With Vibration Feedback
This vision-based 3D input interface for mobile devices does not require space on the surface of the device, other physical devices, or specific environments. Based on a camera with a wide-angle lens, it can operate in a wide 3D space.
The system achieves highly accurate detection of the 3D position of a fingertip. Three parameters are estimated to track the fingertip: translation along the plane perpendicular to the camera’s optical axis, rotation around the optical axis, and scale change. The Lucas-Kanade algorithm is used to estimate these parameters. The 3D position of the fingertip can be estimated because the scale change of the fingertip is inversely proportional to the distance between the finger and the camera.
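The depth recovery follows directly from that inverse proportionality: one calibrated scale/depth pair fixes the constant. A minimal sketch:

```python
def fingertip_depth(scale, ref_scale, ref_depth_m):
    """The tracked fingertip's apparent scale is inversely proportional
    to its distance from the camera, so a single calibrated reference
    pair (ref_scale at ref_depth_m) determines the constant."""
    return ref_depth_m * ref_scale / scale

# Calibrated at 1.0x scale = 30 cm; a 1.5x template scale puts the
# fingertip at 20 cm from the camera:
print(fingertip_depth(1.5, 1.0, 0.30))  # 0.2
```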
A keystroke action is defined as a gesture in which the fingertip moves slightly in the direction of the camera's optical axis, as if tapping a touch panel. Because tactile feedback is important for fast typing, a vibrator attached to the back of the display vibrates briefly when users make keystroke actions.
With this system, users can type letters in the air.
Contributor
Takehiro Niikura
Yuki Hirobe
Alvaro Cassinelli
Yoshihiro Watanabe
Takashi Komuro
Masatoshi Ishikawa
Atsushi Matsutani
The University of Tokyo
Lumino: Tangible Building Blocks Based on Glass Fiber Bundles
These tangible building blocks allow users to assemble physical 3D structures on a tabletop computer. All luminos are tracked using the table’s built-in camera, including luminos located on top of other luminos. To enable this, each lumino contains a glass fiber bundle that allows the camera to “see through it”.
With this innovation, Lumino extends the concept of fiducial markers commonly used with tabletop computers to the third dimension. Yet it preserves many of the benefits of regular tabletop markers: luminos are unpowered, self-contained objects that require no calibration, so it's easy to maintain a large number of them.
The demonstration shows three different hands-on interactions, so attendees can grasp the tangible nature of luminos and understand the specific mechanics and optics behind them:
1. The Touch-Up demo allows users to browse and touch up digital photos. A stack of luminos functions as a “time machine” that allows users to look through an image collection. Attendees can then retouch selected images using lumino “multi-dials” that control several image parameters at once.
2. The Construction Kit demo allows attendees to slip into the role of a (very simple) architect, while the table takes on the role of a "civil engineer". Attendees can try out different 3D constructions as the table tracks what is being constructed and displays piece lists, running totals of construction cost, and summaries of the design session that can be shared with others.
3. A prototyping station provides materials, such as plastic fibers, aluminum blocks, and tools, so attendees can make their own luminos.
Contributor
Patrick Baudisch
Torsten Becker
Frederik Rudeck
Hasso Plattner Institute
Matrix LED Unit With Pattern Drawing and Extensive Connection
With this matrix LED system for pattern display and interaction, users can draw patterns with a light source such as a laser pointer. LED arrays display the patterns and sense the light. Each unit has a channel for communicating with neighboring units, which lets the system extend to larger display areas by connecting units as desired. The drawn pattern is morphed by user interactions, enabled by a tilt sensor in each unit. Pattern morphing is also performed by scrolling patterns across the connected units, or by a so-called "life game" (Game of Life) pattern transition, as sketched below.
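The "life game" transition is Conway's Game of Life run over the connected units; each unit needs only its neighbors' states, which matches the unit-to-unit communication channels described above. A minimal sketch of one update step (the wrap-around neighborhood is an assumption):

```python
def life_step(grid):
    """One Game of Life update over a 2D grid of on/off LED states."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping at the grid edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A cell is born with 3 neighbors, survives with 2 or 3.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt
```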
Contributor
Junichi Akita
Kanazawa University
Meta Cookie
Meta Cookie is the world's first pseudo-gustation system that induces cross-modal effects so humans can perceive various tastes by changing only visual and olfactory information. The system allows users to feel that they are eating a flavored cookie even though they are eating a plain cookie with an AR marker.
Gustatory information has rarely been studied in relation to computers, even though many studies have explored visual, auditory, haptic, and olfactory sensations. This scarcity of research on gustatory information exists because:
• Gustatory sensation is based on chemical signals, whose functions are still not fully understood.
• Perception of gustatory sensation is affected by other factors, such as vision, olfaction, thermal sensation, and memories.
This complex cognition mechanism for gustatory sensation makes it difficult to create a gustatory display. Meta Cookie combines augmented-reality technology and olfactory-display technology. Merging these two technologies creates a revolutionary interactive gustatory display that reveals a new horizon for computer-human interaction.
Contributor
Takuji Narumi
Takashi Kajinami
Tomohiro Tanikawa
Michitaka Hirose
The University of Tokyo
QuintPixel: Multi-Primary Color Display Systems
Multi-Primary Color (MPC) display systems employ one or more sub-pixels in addition to red, green, and blue (RGB). QuintPixel efficiently reproduces more than 99% of the colors in Pointer’s dataset, which consists of the existing colors in the world except those displayed by self-luminous objects. Because it includes yellow and cyan sub-pixels, QuintPixel can reproduce the colors of sunflower yellow, the golden mask of Tutankhamen’s mummy, the emerald green sea, pigment colors, etc. – colors that are beyond the color gamut in conventional display devices based on RGB.
Though QuintPixel adds sub-pixels, it does not enlarge the overall pixel area. By decreasing the area of each sub-pixel, it balances high-luminance reproduction with real-surface-color reproduction.
In addition to their advanced color-reproduction capability, MPC display systems have another advantage over RGB systems: color-reproduction redundancy. Currently, most input signals are still limited to the three primary colors, so they can display only one RGB combination. MPC systems can display more color combinations. Exploiting this redundancy advantage, QuintPixel introduces further benefits and applications. For example:
• Pseudo-super resolution. Because MPC systems have more sub-pixels than RGB systems, they deliver enhanced perceptual resolution in display devices.
• Rendering improvement at different viewing angles. One of the major issues in liquid-crystal displays (LCDs), viewing-angle dependency, can be addressed by choosing the combination of MPC primaries that reproduces a given color with the smallest perceptual difference across viewing angles (a sketch of this redundant primary selection follows).
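A minimal numeric sketch of the redundancy: a three-channel target color under-determines five drive values, so many drive combinations reproduce the same color. The primary matrix below is illustrative, not Sharp's measured primaries:

```python
import numpy as np

# Illustrative 3x5 matrix mapping the five primaries (R, G, B, Y, C) to
# CIE XYZ; real values would come from display measurements.
P = np.array([[0.41, 0.36, 0.18, 0.45, 0.20],
              [0.21, 0.72, 0.07, 0.48, 0.35],
              [0.02, 0.12, 0.95, 0.05, 0.55]])

def five_primary_drives(target_xyz):
    """The minimum-norm least-squares solution is one arbitrary pick
    among the valid five-drive combinations; QuintPixel instead selects
    the combination that minimizes, e.g., perceptual error across
    viewing angles."""
    drives, *_ = np.linalg.lstsq(P, target_xyz, rcond=None)
    return np.clip(drives, 0.0, 1.0)
```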
Contributor
Kazunari Tomizawa
Akiko Yoshida
Kohzoh Nakamura
Yasuhiro Yoshida
SHARP Corporation
RePro3D: Full-Parallax 3D Display Using Retro-Reflective Projection Technology
RePro3D is a full-parallax 3D display system suitable for interactive 3D applications. The approach is based on a retro-reflective projection technology in which several images from a projector array are displayed on a retro-reflective screen. When viewers look at the screen through a half mirror, they see a 3D image, without glasses.
The screen shape depends on the application, and image correction to compensate for screen shape is not required. So the system works as a touch-sensitive soft screen, a complexly curved screen, or a screen with an automatically moving surface.
RePro3D has a sensor function to recognize user input, so it can support some interactive features, such as manipulation of 3D objects. To display smooth motion parallax, the system uses a high-density array of projection lenses in a matrix on a high-luminance LCD.
The array is integrated with an LCD, a half mirror, and a retro-reflector as a screen. An infrared camera senses user input. Because the retro-reflector causes an intense reflection of the rays, a clear distinction is possible between the screen and other objects such as the user’s hands.
The current prototype of RePro3D displays distortionless parallax images on a curved surface from 40 viewpoints. Users are able to intuitively manipulate 3D objects with hand motions.
Contributor
Takumi Yoshida
Sho Kamuro
The University of Tokyo
Kouta Minamizawa
Hideaki Nii
Susumu Tachi
Keio University
Shaboned Display: An Interactive Substantial Display Using Soap Bubbles
Artists and designers, mainly in the field of media art, have used soap bubbles floating randomly in air as interaction tools. This novel interactive substantial display controls the size and pattern of soap bubbles and uses them as pixels to display images.
Shaboned Display features three innovations:
1. The system can show images with bubbles arranged in a matrix in a plane. By controlling the volume and timing of airflow from underneath, it manipulates bubble size and shape (see the sketch after this list). As the bubbles expand and contract, the display presents images such as characters or figures.
2. The system creates soap bubbles automatically. Even if air movement or users' fingers disrupt some of the bubbles, the system rapidly remakes the film and shows the same images. Shaboned Display can also break bubbles intentionally to display images of bursting events.
3. The display is interactive. Via electrodes on the soap bubbles' surfaces and the edge of the air vent, it detects blowout events by sensing the resistance of the circuit. The system can also detect users' hand gestures with image processing. These input data can be used for interactive applications.
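A minimal sketch of both sides of one bubble "pixel" as described in items 1 and 3: airflow out, resistance sensing in. All constants are hypothetical:

```python
def pump_command(pixel_brightness, max_flow=1.0):
    # Display side: brighter target pixel -> more airflow -> larger
    # bubble. The linear mapping is an illustrative assumption.
    return max_flow * min(max(pixel_brightness, 0.0), 1.0)

def bubble_state(resistance_ohm, burst_threshold_ohm=1e6):
    # Sensing side: the soap film closes a circuit between the electrode
    # at the vent edge and the film surface; a jump to a very high
    # resistance means the film is gone, i.e. the bubble has burst
    # (the threshold is hypothetical).
    return "burst" if resistance_ohm > burst_threshold_ohm else "intact"
```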
Shaboned Display can work as an ambient information board, in which audiences observe digital information with analog phenomena such as air movement. In interactive applications, when a user bursts a bubble, adjacent bubbles also burst sequentially like ripples. These bursting events also generate audio feedback, so audiences can enjoy the full impact of unintentional phenomena.
Contributor
Shiho Hirayama
Yasuaki Kakehi
Keio University
Slow Display
Displays should not be limited to fast displays for video or static displays that mimic paper. This project introduces a high-resolution display that requires little energy but updates at very low frame rates.
A laser scanner activates monostable light-reactive materials and exploits the temporary persistence of these materials to provide programmable space-time resolution. The resolution of the display is limited by laser-scanner movements and laser-spot properties, but it is not dependent on the particle size of the light-sensitive material. Projection surfaces can consist of complex 3D materials, allowing objects to become low-energy, ubiquitous peripheral displays.
The opportunities for using remote activation of monostable or bistable materials to create novel displays are immense. Possible Slow Display applications include: light-activated displays, large high-resolution emissive displays, and low-power reflective outdoor displays. By mixing emissive and reflective materials, this project demonstrates displays that are visible both during the day and at night. When applied to modeling materials, such as paper and clay, the monostable materials provide persistent projected decals for children's toys and interactive physical/digital design applications. The slow display is equally applicable to everyday objects and environments because it allows users to skin their surroundings daily or hourly.
Contributor
Daniel Saakes
Ramesh Raskar
MIT Media Lab
Naoya Koizumi
Keio University
Touch Light Through the Leaves: A Tactile Display for Light and Shadow
You can feel something good when light falls through the trees into the upturned palms of your hands. With this visual-tactile display, users can sense that light and feel the transition between light and shadow.
Touch Light Through the Leaves consists of a camera and 85 vibration units. The camera detects light and shadow, and the vibration units, controlled via image processing and vibration motors, change those inputs into tactile sensations. The display is palm-sized, so it can be used anywhere under various conditions.
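A minimal sketch of the image-to-vibration mapping, assuming a 17 x 5 unit layout; only the count of 85 units is given in the text:

```python
import numpy as np

NUM_UNITS = 85  # vibration units in the palm-sized display

def vibration_levels(gray_frame, grid=(17, 5)):
    """Average the camera's light/shadow image over one cell per
    vibration unit; each unit then vibrates in proportion to the light
    falling on its cell. The 17x5 layout is an assumption."""
    rows, cols = grid
    h, w = gray_frame.shape
    # Crop to a multiple of the grid, then average each cell.
    cells = gray_frame[:h - h % rows, :w - w % cols].astype(float).reshape(
        rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    return (cells / 255.0).ravel()[:NUM_UNITS]  # 0 = shadow, 1 = full light
```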
People who have experienced this display report weird, new sensations. In their daily lives, light and shadow are perfectly ordinary, but when they feel light and shadow directly on their palms, they are "touched" by light for the first time.
Contributor
Kunihiro Nishimura
Yasuhiro Suzuki
Michitaka Hirose
The University of Tokyo