Imaging at One Trillion Frames per Second

By Ramesh Raskar and Christopher Barsi

High-speed imaging is a long-standing goal in optics, with applications in spectral dynamics, motion analysis, and three-dimensional imaging. Commercially available systems currently offer sensors operating at one million frames per second at reduced spatial resolution.1 In the lab, serial time-encoding of 2D images has provided 100 ps shutter speeds.2 Recently, researchers in the Camera Culture group at the MIT Media Lab, led by Professor Ramesh Raskar, have developed a camera system with an effective time resolution of 2 ps, roughly one half of a trillion frames per second.3

Raskar’s system repurposes a well-known device called a streak sensor, capable of resolving 2 ps time scales, combined with an ultrafast femtosecond Titanium:Sapphire laser. In this system, the laser illuminates a scene of interest, and the streak sensor records the scattered light. On its own, however, the streak sensor has several drawbacks. First, with an effective exposure time of 2 ps, the signal-to-noise ratio (SNR) is extremely low, and any scattered light would be buried in noise. Second, the streak sensor has a one-dimensional aperture, so it can image only a single horizontal line of a scene. Third, given the time resolution, a mechanism must be in place to synchronize the laser with the detector.

Raskar’s team, in collaboration with Diego Gutierrez at the University of Zaragoza and Moungi Bawendi, has overcome all three limitations. The key is to exploit the periodic nature of the pulsed beam: assuming a static scene, the light scattered from it is identical for every illuminating pulse. The streak sensor can therefore record many pulses, save the data, and numerically stitch together a high-resolution movie.
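
The gain from pulse averaging can be sketched numerically. The snippet below is an illustration with made-up numbers, not the actual sensor model: a weak, repeatable pulse is buried in noise, and averaging N identical exposures shrinks the noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D "streak" trace: a weak pulse buried in sensor noise.
t = np.linspace(0, 1, 500)
signal = np.exp(-((t - 0.5) / 0.02) ** 2)   # true scattered pulse (peak ~1)
noise_sigma = 5.0                           # noise far above the signal level

def record_pulse():
    """One short exposure: identical signal, fresh random noise."""
    return signal + rng.normal(0, noise_sigma, t.size)

# Averaging N repeated exposures reduces the noise std by sqrt(N).
single = record_pulse()
averaged = np.mean([record_pulse() for _ in range(10_000)], axis=0)

snr_single = signal.max() / single.std()
snr_avg = signal.max() / (averaged - signal).std()
print(f"single-shot SNR ~ {snr_single:.2f}, averaged SNR ~ {snr_avg:.1f}")
```

This is the statistical reason a static scene is required: if the scene changed between pulses, the per-pulse signals would no longer be identical and could not be averaged coherently.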

The experimental setup is shown in Figure 1. A mode-locked Ti:Sapphire laser emits 50 fs pulses at a repetition rate of 75 MHz. The beam is focused with a plano-convex lens onto a metallic diffuser, which therefore acts as a point light source, illuminating the entire scene with a spherical energy front. A portion of the beam is split and sent to a photodetector connected to the streak sensor. This path acts as a synchronization mechanism, causing the streak sensor to “fire” at the appropriate instant. The streak sensor then records a 1D movie of the scene. These x-t space-time planes have a resolution of 672 x 512, with a time resolution of 2 ps/pixel. To capture the vertical dimension, a mirror scanning system rotates the camera’s line of sight, and x-t streak images are recorded sequentially. Numerically, the streak images are stacked into an x-y-t data cube. The result is a movie with a frame rate of one half trillion frames per second.
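
The stacking step can be sketched as follows. Only the 672 x 512 streak-image resolution and the 2 ps bins come from the article; the `record_streak_image` placeholder and the number of mirror positions `NY` are assumptions for illustration.

```python
import numpy as np

NX, NT = 672, 512   # horizontal pixels, time bins (2 ps each), per the article
NY = 120            # number of scanned view lines (assumed, for illustration)

def record_streak_image(y):
    """Placeholder for one synchronized x-t streak acquisition at view line y."""
    return np.zeros((NX, NT))

# Sequentially recorded x-t planes are stacked along y into an x-y-t data cube.
cube = np.stack([record_streak_image(y) for y in range(NY)], axis=1)

# Frame k of the final movie is a 2 ps snapshot of the whole 2D scene.
frame = cube[:, :, 100]   # x-y image at t = 200 ps
```

Each time slice of the cube is then one frame of the half-trillion-fps movie.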

With this system, light can be observed as it scatters through macroscopic scenes. To visualize these time-resolved movies, time of flight is encoded as different colors on static photos.4 Consider the image in Figure 2 (left). Here, the light propagates through a plastic bottle filled with milky water. Light enters the bottle from the left, propagates rightward, scatters strongly at the cap, and reflects from the surface of the table. The entire propagation time is only 1 ns.
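
A minimal sketch of this color-coding idea, using a random stand-in for the measured data cube and a simple blue-to-red ramp rather than the authors’ actual palette:

```python
import numpy as np

rng = np.random.default_rng(1)
movie = rng.random((64, 64, 200))    # stand-in for a measured x-y-t data cube

# Per-pixel time of flight: the time bin where intensity peaks.
arrival = movie.argmax(axis=2)
normalized = arrival / (movie.shape[2] - 1)

# Encode arrival time as color on one static image (blue = early, red = late).
rgb = np.stack([normalized, np.zeros_like(normalized), 1 - normalized], axis=-1)
```

The result is a single photo in which the sweep of the light pulse across the scene can be read off from the color gradient.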

Interestingly, there are some surprising features in this photo. For example, the energy fronts in the bottle (near the label) are curved, when, in fact, the pulse energy front is planar. In addition, the reflections on the table seem to be propagating away from the sensor and towards the bottle. The team discovered that these distortions arise because the object and sensor are physically separated: each scene point’s light reaches the sensor only after an additional propagation delay that depends on its distance. Using the known position of the scene, the movie can be time-warped to remove these effects, Figure 2 (right).
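
The correction can be sketched as a per-pixel time shift. Here `warp_movie`, the depth map, and the toy numbers are illustrative assumptions; only the 2 ps bin width comes from the article.

```python
import numpy as np

C = 3e8        # speed of light, m/s
DT = 2e-12     # time bin width: 2 ps per frame

def warp_movie(cube, depth_map):
    """Shift each pixel's time trace by its scene-to-sensor travel time.

    cube: measured x-y-t data; depth_map: known distance (m) from each
    scene point to the sensor (assumed known geometry, for illustration).
    """
    nx, ny, nt = cube.shape
    shift = np.rint(depth_map / C / DT).astype(int)   # delay in time bins
    warped = np.zeros_like(cube)
    for i in range(nx):
        for j in range(ny):
            s = shift[i, j]
            warped[i, j, : nt - s] = cube[i, j, s:]   # undo the extra delay
    return warped

# Toy check: a pulse delayed by its travel time returns to bin 0 after warping.
cube = np.zeros((2, 2, 50))
depth = np.full((2, 2), 0.006)   # 6 mm -> 20 ps -> 10 bins of delay
cube[:, :, 10] = 1.0
assert warp_movie(cube, depth)[0, 0, 0] == 1.0
```

After this warp, events at equal depths appear simultaneous, restoring the planar energy fronts.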

What are some applications of this time-resolved sensor? With this ultrafast bandwidth, the researchers have demonstrated the ability to reconstruct images of objects that are hidden from view around a corner.5 As shown in Figure 3, a toy mannequin is hidden from the streak sensor. The laser, instead of illuminating the object itself, illuminates a third wall, which scatters light in all directions. Some of those light rays reach the object, are scattered back to the wall, and ultimately to the camera. The streak sensor can record the arrival times of all the different ray paths, but the positions are mixed up. A numerical algorithm, similar to computed tomography methods, is used to “unmix” these rays and reconstruct the shape of the hidden object. This work was recently published in Nature Communications, and for this research, lead author Dr. Andreas Velten was awarded MIT’s prestigious TR35 award.
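
The flavor of this “unmixing” can be sketched with a naive ellipsoidal back-projection. This is a toy under simplified geometry (single-bounce paths from a laser spot on the wall), with invented names and tolerances, not the published algorithm.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(times, wall_pts, laser_pt, voxels):
    """Toy ellipsoidal back-projection for around-the-corner imaging.

    Each (wall point, arrival time) pair constrains the hidden point to an
    ellipsoid of constant path length; voxels consistent with a measurement
    get a vote, and the hidden surface emerges where votes pile up.
    """
    heat = np.zeros(len(voxels))
    for p, t in zip(wall_pts, times):
        path = (np.linalg.norm(voxels - laser_pt, axis=1) +
                np.linalg.norm(voxels - p, axis=1))
        heat += np.isclose(path, C * t, atol=0.01)   # 1 cm tolerance (assumed)
    return heat

# Toy check: a single hidden point generates arrival times; the vote map
# should peak at that point.
rng = np.random.default_rng(2)
hidden = np.array([0.0, 0.5, 0.3])
wall = rng.random((50, 3)) * np.array([1.0, 0.0, 1.0])   # points on y=0 wall
spot = np.array([0.5, 0.0, 0.5])                          # laser spot on wall
times = (np.linalg.norm(hidden - spot) +
         np.linalg.norm(wall - hidden, axis=1)) / C
grid = rng.random((200, 3))
grid[0] = hidden                                          # include true point
votes = backproject(times, wall, spot, grid)
```

A single wall-point/time pair is ambiguous; it is the intersection of many such ellipsoids, one per measurement, that pins down the hidden geometry, much as in computed tomography.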

This camera system has the potential to enable imaging in the presence of scattering media. Applications include remote sensing, endoscopic systems with wider fields of view, quality assurance in hazardous environments, and fundamental investigations of ultrafast events. The technique is full-field and does not require coherence, suggesting that high-speed LEDs could be used. Overall, the method and results suggest new possibilities that can push the frontiers of imaging and scattering forward.

Figure 1: Experimental setup for imaging at a trillion frames per second.

img1


Figure 2: Visualizations of optical propagation through a macroscopic scene. Left: visualization of light scattering through a bottle. Right: same visualization with computational re-warping of time to account for object-to-sensor travel times.

img2

Figure 3: Using the femtocamera to look around corners. From left to right: object, setup, measured space-time data, and reconstruction.

img3

1For example, see the Phantom v12.1 (http://www.visionresearch.com/Products/High-Speed-Cameras/v121/).

2K. Goda, K. K. Tsia, B. Jalali, Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena. Nature 458, 1145-1149 (2009).

3See the group’s home pages at http://femtocamera.info/ and http://web.media.mit.edu/~raskar/cornar/

4A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez and R. Raskar. Femto-photography: capturing and visualizing the propagation of light. ACM Trans. Graph. 32, 44 (2013).

5A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, R. Raskar. Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nature Communications. 3, 745 (2012).