In antiquity, we see examples of magnifying crystals formed into a biconvex shape as early as the 7th century BC. Whether the people of that period used them for starting fires or for vision is unclear. Still, it is famously said that Emperor Nero of Rome watched gladiator games through an emerald.
Needless to say, the views we get through modern lenses are a lot more realistic. So how did we get from simple magnifying systems to the complex lens systems we see today? We start with a quick journey through the history of the camera and the lens, and we’ll end up with the cutting edge in lens design for smartphone cameras and VR headsets.
Theory and Practice
Philosophers and scientists across most cultures and periods have thought about light. Our modern theories of light date back to the 1600s, and the work of scientists like Johannes Kepler, Willebrord Snellius, Isaac Newton, and Christiaan Huygens. Of course, it wasn’t without controversy. Newton and many others had put forward the idea that light was a particle that moved in a straight line like a ray, while Huygens and others proposed that light behaved more like a wave. For a while, Newton’s camp won out.
This changed in the 1800s when Thomas Young’s interference experiments showed data that no particle theory could explain. Fresnel, in 1821, managed to describe light not as a longitudinal wave but as a transverse wave. This became the de facto theory of light, known as the Huygens-Fresnel principle, until Maxwell’s electromagnetic theory came along and ended the era of classical optics.
Meanwhile, practical eyeglasses were likely invented in central Italy around 1290. Eyeglasses spread throughout the world and spectacle makers also started making telescopes. The first patent for a telescope was filed in 1608 in the Netherlands. However, the patent application was not granted because by that time telescopes were already somewhat common. These refracting telescopes were quite popular and often simple two-element systems. Reflecting telescopes such as the one built by Newton in 1668 were built in part to prove his theories about chromatic aberration. Newton turned out to be mostly correct: a lens refracts light to a focal point, but different wavelengths refract by different amounts. Each color therefore comes to focus at a slightly different point, distorting the image with colored fringes.
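To see how small this focal shift is in practice, here’s a sketch using the thin-lens lensmaker’s equation. The radii are made up for illustration, and the refractive indices are approximate published values for BK7 crown glass at the blue F line and red C line:

```python
# Thin-lens sketch of chromatic aberration: a single biconvex lens
# focuses blue light closer than red light because glass bends
# shorter wavelengths more. Indices are approximate BK7 values.

def focal_length(n, r1, r2):
    """Lensmaker's equation for a thin lens in air (radii in mm)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

R1, R2 = 100.0, -100.0   # symmetric biconvex lens (illustrative radii)
n_blue = 1.5224          # BK7 near 486 nm (F line)
n_red = 1.5143           # BK7 near 656 nm (C line)

f_blue = focal_length(n_blue, R1, R2)
f_red = focal_length(n_red, R1, R2)

print(f"blue focus:   {f_blue:.1f} mm")   # ~95.7 mm
print(f"red focus:    {f_red:.1f} mm")    # ~97.2 mm
print(f"axial spread: {f_red - f_blue:.2f} mm")
```

A millimeter and a half of axial spread doesn’t sound like much, but it is enormous compared to the focus tolerance of a sharp image, which is why single-element lenses show visible color fringing.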
When film arrived on the scene, it was discovered that cameras suffered from spherical aberration as well – the lens could not focus the image over a wide, flat plane. Charles Chevalier created an achromatic lens that could control both chromatic and spherical aberrations. However, this meant that the aperture at the front was quite small (f/16), pushing the exposure time to twenty or thirty minutes.
While not useful for cameras, the fresnel lens came around this time in 1818 and saved hundreds if not thousands of ships. The French Commission of Lighthouses had hired Fresnel to design the lens and it had worked out quite well. Perhaps because of this success, in 1840 the French government offered a prize for anyone who could come up with a lens that could reduce exposure times in cameras.
Joseph Petzval was a math professor who took on the challenge. Eight human artillery computers were lent to his project by an archduke for six months – this was cutting-edge design. Ultimately he wasn’t offered the prize since he wasn’t French, but his lens was the best performing of those submitted that year.
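The payoff of Petzval’s design is easy to quantify, since exposure time scales with the square of the f-number. Petzval’s portrait lens is commonly cited as roughly f/3.6; taking that figure as an assumption and comparing it to Chevalier’s f/16:

```python
# Exposure time scales as the square of the f-number: halving the
# f-number admits four times the light. Comparing Chevalier's f/16
# achromat to Petzval's roughly f/3.6 portrait lens (cited figure):

def relative_exposure(n_slow, n_fast):
    """How many times faster the fast lens exposes vs. the slow one."""
    return (n_slow / n_fast) ** 2

speedup = relative_exposure(16.0, 3.6)
minutes_slow = 25.0                 # mid-range of the 20-30 min figure
seconds_fast = minutes_slow * 60.0 / speedup

print(f"speedup:  {speedup:.1f}x")                      # ~19.8x
print(f"exposure: {seconds_fast:.0f} s instead of {minutes_slow:.0f} min")
```

Cutting a half-hour sitting down to around a minute is what made portrait photography commercially viable.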
Petzval’s lens was one of the first four-element lens systems and one of the first lenses designed specifically for the camera rather than being a repurposed camera obscura or telescope part. As a result, it was a prevalent lens design for the next century. While further tweaks were common, they were mostly done via trial and error rather than going back to the mathematical underpinnings that created the lens in the first place.
The next jump forward came in 1890 with the Zeiss Protar, which used new types of glass with different indexes of refraction and other optical properties. Combining different glasses together resulted in a lens that corrected almost all aberrations. This type of lens is known as an Anastigmat, and the Protar was the first.
There’s much more history here around the rise of the Japanese lens manufacturers and the fall of the German ones. But we’re going to skip ahead to the smartphone.
The Modern Smartphone
We discussed it briefly in our longer article that talks about what makes up a smartphone. But modern smartphone lenses have become complex because they have to capture adequate light while staying small. An excellent resource is this blog post we linked to in the above article.
Many smartphones today still use a three-element lens system, heavily inspired by the Cooke triplet.
It has the advantage of being fairly easy to explain and relatively simple to manufacture. The first lens has high optical power and a low index of refraction and dispersion since we cannot correct for such aberrations. The second lens offsets any aberrations that do occur in the first and is a different material, helping to reduce the spherical effect that the first lens produces. The third lens corrects the distortion from the first two and flattens the rays onto the image plane.
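A rough feel for why a converging-diverging-converging stack still focuses light comes from the two-lens combination formula, P = P₁ + P₂ − d·P₁P₂, applied twice. The powers and spacings below are made up for illustration (not a real Cooke prescription), and chaining the pairwise formula is itself an approximation that ignores principal-plane shifts:

```python
# Sketch: a positive-negative-positive triplet still yields a net
# converging system. Powers (diopters) and 10 mm spacings are
# illustrative only, not a real Cooke triplet prescription.

def combine(p1, p2, d):
    """Combined power of two thin lenses separated by d meters."""
    return p1 + p2 - d * p1 * p2

# front (converging crown), middle (diverging flint), rear (converging)
p_front, p_mid, p_rear = 20.0, -30.0, 20.0   # diopters
d1 = d2 = 0.01                                # 10 mm air gaps

p12 = combine(p_front, p_mid, d1)      # front pair: net -4 D
p_total = combine(p12, p_rear, d2)     # whole triplet: net +16.8 D

print(f"net power: {p_total:.1f} D "
      f"(focal length ~{1000.0 / p_total:.0f} mm)")
```

The negative middle element eats some of the front element’s power, but it pays for itself by cancelling aberrations; the system as a whole still converges.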
Then we abruptly go to something like this. Look at the lenses. None of them are lovely spherical shapes. Instead, they’re strange and mysterious.
This is the lens stack-up from around an iPhone 7 – it isn’t clear which patent was used in which phone. The front lens has high optical power, and the second lens tries to correct that. But then the last four lenses are all wonky shapes that correct for distortion and spherical aberration.
Unlike larger cameras, most of the lenses in a cell phone are the same material. Why? The simple answer is that they have to be. Smartphone lenses are mostly plastic rather than lapped glass. Contrary to what you may think, making them in plastic is more complex than glass. Anyone who has worked with resin can tell you that getting defect-free clear plastic is no easy feat. The plastics we can use for lenses come in only two main varieties, with two indexes of refraction to choose from. Glass comes in a whole spectrum, doped with various materials to get exotic IoR and Abbe numbers. In fact, some of the more exotic camera lenses have radioactive materials such as thorium in them. However, plastics can better form unique shapes compared to glass. Lapping glass into anything other than a sphere is difficult to scale and manufacture consistently. Plastic is molded and can be in any form you want.
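The Abbe number mentioned above is just a standard measure of dispersion: how much a material’s refractive index changes across the visible spectrum, with higher numbers meaning less chromatic spread. A quick sketch, using approximate published indices for BK7 crown glass and polycarbonate (a common optical plastic; the F- and C-line values here are approximations):

```python
# Abbe number V_d = (n_d - 1) / (n_F - n_C): higher means less
# dispersion. Indices below are approximate values for BK7 crown
# glass and polycarbonate at the d, F, and C spectral lines.

def abbe_number(n_d, n_F, n_C):
    """Dispersion figure of merit from three spectral-line indices."""
    return (n_d - 1.0) / (n_F - n_C)

v_bk7 = abbe_number(1.5168, 1.5224, 1.5143)   # low dispersion
v_pc = abbe_number(1.5850, 1.5994, 1.5799)    # high dispersion

print(f"BK7 crown glass: V = {v_bk7:.0f}")    # ~64
print(f"polycarbonate:   V = {v_pc:.0f}")     # ~30
```

With only a couple of plastics to choose from, lens designers can’t play crown against flint the way glass designs do, which is part of why smartphone lenses lean so heavily on those wonky aspheric shapes instead.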
Additionally, there are plenty of other features that smartphones offer, such as optical image stabilization which uses MEMS to move the lens around in response to the motion. Of course, this requires moving one or more lenses or even the camera module itself, which introduces a host of problems as each lens has a specific role in handling aberrations. In the latest iPhone 12, the CMOS image sensor moves rather than the lenses. This allows the lenses to retain much of their optical power while still correcting for aberrations.
If photography drove lens innovation in the 1800s, it’s probably the cell phone driving it in the 2000s. But there’s one more niche application that might shake things up in the near future: VR. Currently, VR headsets are large and bulky. They feel this way partially because so much of their weight sits far from your face, pulling down harder. If the headset could be thinner, that would make for a more comfortable experience.
Right now, much of that bulk comes from the lenses and the distances needed to focus the image so that it looks correct when the headset is on. Recently, Facebook / Oculus / Meta showed off some of its prototype headsets, and a few tried to address this. Depending on where the user is looking, the headset does things like varying the focal plane and correcting for lens distortion in software on the fly.
The Future Of Lenses
Some are saying that we can get rid of lenses altogether. Several companies, such as Metalenz, are building waveguides out of silicon nanostructures. The advantage is that they can be packaged right on top of the CMOS image sensor without any complex housing. Systems that used to need dozens of lenses to reach the required accuracy and low distortion could be compressed into a single layer, letting regular cameras and spectrometers shrink dramatically.
Additionally, this is something that VR headsets are very interested in, as waveguides could be built into the screens, allowing for a wider field of view with less weight and bulk. The future certainly holds a lot of exciting new developments for the design of lenses. Even as we move toward lenses that are distortion-free in more scenarios with more control, some photographers are going back to older lenses. Sometimes it is for the nostalgia, and sometimes it’s because they like the look. Perhaps if Emperor Nero were to squint through our various lenses, cameras, and VR headsets today, he might still prefer the emerald, optical distortions and all.