A former Google SVP recently praised Apple’s use of computational photography to achieve shallow depth-of-field shots with the dual camera on the iPhone 7 Plus. A research team has now developed a next-generation computational photography technique that allows you to change the perspective of a photograph after it has been taken – including achieving effects not possible with conventional cameras …
The research paper was spotted by DPReview.
For example, there may be times when you want to take a photo of a person standing in front of an impressive building or other nearby structure. With conventional photography, you either need to shoot from a long way back – which leaves the person a small element in the frame – or use a wide-angle lens, which distorts the person (in very unflattering ways!).
According to UCSB, computational zoom technology can sometimes allow the creation of ‘novel image compositions’ that can’t be captured using a physical camera. One example is multi-perspective images that combine elements from photos taken with a telephoto lens and with a wide-angle lens.
With the technique described here, several photos taken from different positions can be combined in ways that effectively apply different focal lengths to different elements of the final image. For example, the building (or the rollercoaster in the example shown below) can be rendered with a short focal length that shows the whole thing, while the person in front of it gets a long focal length that leaves them looking as they do in real life.
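To get a feel for the compositing step, here’s a minimal sketch in Python using OpenCV and NumPy. It simply blends a subject cut from a telephoto frame over a wide-angle background with a feathered mask; the real computational zoom pipeline reconstructs scene geometry from the multiple views to do this automatically, and the filenames and precomputed mask here are purely hypothetical stand-ins.

```python
# Toy illustration only: composite a telephoto subject over a wide-angle
# background. Assumes the two frames are the same size and pre-aligned;
# all filenames and the mask are hypothetical.
import cv2
import numpy as np

wide = cv2.imread("wide_angle.jpg").astype(np.float32)  # whole building visible
tele = cv2.imread("telephoto.jpg").astype(np.float32)   # natural-looking subject

# Single-channel mask: 255 where the person is, 0 elsewhere (hypothetical file).
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Feather the mask edge so the two perspectives blend smoothly,
# then add a channel axis so it broadcasts over the 3 color channels.
mask = cv2.GaussianBlur(mask, (21, 21), 0)[..., None]

# Per-pixel blend: telephoto perspective for the subject,
# wide-angle perspective for everything else.
composite = mask * tele + (1.0 - mask) * wide
cv2.imwrite("multi_perspective.jpg", composite.astype(np.uint8))
```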
The approach isn’t as simple as Portrait Mode, where the iPhone 7 Plus takes one photo with each lens and uses parallax to work out depth before applying an artificial blur to the background. Computational zoom requires you to take a number of separate photos from different places, but that’s not to say that a future iPhone equipped with more than two cameras couldn’t do the same thing with what amounts to a single shot.
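For comparison, the Portrait Mode idea described above – parallax between the two lenses gives depth, and the background gets an artificial blur – can be roughly sketched as follows. To be clear, this is not Apple’s implementation: the block-matching stereo method, filenames and disparity threshold are all stand-in assumptions.

```python
# Rough Portrait-Mode-style sketch: estimate disparity from a stereo pair,
# then blur whatever the disparity map says is far away. Filenames and
# the threshold are illustrative assumptions, not Apple's pipeline.
import cv2
import numpy as np

left = cv2.imread("lens_left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("lens_right.jpg", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("lens_left.jpg")  # full-color frame to blur

# Block matching: larger disparity means closer to the camera.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Treat low-disparity pixels as background (invalid matches end up here too),
# and feather the mask so the blur boundary isn't a hard edge.
background = (disparity < 8.0).astype(np.float32)
background = cv2.GaussianBlur(background, (31, 31), 0)[..., None]

# Blend a heavily blurred copy into the background regions only.
blurred = cv2.GaussianBlur(color, (51, 51), 0).astype(np.float32)
portrait = background * blurred + (1.0 - background) * color.astype(np.float32)
cv2.imwrite("portrait_mode.jpg", portrait.astype(np.uint8))
```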
Via The Verge