We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference.
“Our camera uses 36 fixed-focus 2 megapixel mobile phone camera modules. The camera modules are mounted in a robust, 3D-printed, ball-shaped enclosure that is padded with foam and handles just like a ball. Our camera contains an accelerometer which we use to measure launch acceleration. Integration lets us predict rise time to the highest point, where we trigger the exposure. After catching the ball camera, pictures are downloaded in seconds using USB and automatically shown in our spherical panoramic viewer. This lets users interactively explore a full representation of the captured environment.”
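The apex-timing trick the quote describes is simple ballistics: integrating the measured throw acceleration gives the launch velocity, and after release the ball rises for v/g seconds. A minimal sketch of that idea (all function names, sample rates, and numbers here are illustrative assumptions, not the actual camera firmware):

```python
# Hypothetical sketch: integrate accelerometer samples taken during the
# throw to estimate launch velocity, then predict time to the highest
# point (v / g) assuming free fall after release.

G = 9.81  # gravitational acceleration, m/s^2

def launch_velocity(accel_samples, dt):
    """Integrate acceleration samples (m/s^2) taken every dt seconds."""
    return sum(a * dt for a in accel_samples)

def time_to_apex(v_launch):
    """After release the ball is in free fall; it rises for v/g seconds."""
    return v_launch / G

# Example: 50 ms of roughly constant 200 m/s^2 throw acceleration,
# sampled at 1 kHz -> launch velocity 10 m/s, apex after about 1.02 s.
samples = [200.0] * 50
dt = 0.001
v = launch_velocity(samples, dt)
print(round(v, 2), round(time_to_apex(v), 2))  # → 10.0 1.02
```

The exposure would then be scheduled that many seconds after release, which is when the ball is momentarily stationary and motion blur is lowest.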
Commemorating the Civil War’s upcoming 150th anniversary, NPR is currently showcasing a nice collection of stereoview photographs from the war. All the photos are courtesy of the National Museum of American History.
“The device is something between a Polaroid camera and a digital camera. The camera doesn’t store the pictures on film or a digital medium, but prints the photo directly onto a roll of cheap receipt paper while taking it. As this all happens very slowly, people have to stay still for about three minutes until a full portrait photo is taken.”
“The Recollector is a 3D collage, created in the computer. Jasper de Beijer has used video game technology to create a virtual environment which is somewhere between a museum, a theatre and a photo archive. The visitor can walk around freely in it, as in a physical space.”
“The ‘American Pixels’ series is a pixel experiment created by Jörg M. Colberg in 2009–2010.
Image formats like JPEG (or GIF) use compression algorithms to save space while trying to retain a large fraction of the original information. A computer that creates a JPEG does not know anything about the contents of the image: it does what it is told, in a uniform manner across the image.”
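The point about content-blind, uniform compression can be made concrete with a toy sketch. Real JPEG quantizes 8×8 DCT blocks; the simplified stand-in below (an assumption for illustration, not JPEG itself) just quantizes raw pixel values identically everywhere, which is enough to show the "uniform manner across the image" the quote describes:

```python
# Toy illustration: lossy "compression" applied uniformly, with no
# knowledge of what the image depicts. (Real JPEG works on 8x8 DCT
# blocks; this raw-pixel quantizer is a deliberate simplification.)

def quantize(pixels, step):
    """Round every pixel down to the nearest multiple of `step`,
    identically across the whole image."""
    return [[(p // step) * step for p in row] for row in pixels]

image = [
    [12, 130, 250],
    [45,  46,  47],
]
print(quantize(image, 16))  # → [[0, 128, 240], [32, 32, 32]]
```

Note how the fine gradient 45, 46, 47 collapses to a flat 32 regardless of whether those pixels belong to a face or to empty sky; the algorithm has no idea which is which.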
“…Each picture shows the artist’s hand making a one-finger gesture, again rude, at a variety of places familiar and unfamiliar. The equal-opportunity dissing encompasses power sites like Tiananmen Square and the White House, but also, intriguingly, Long Island City, Queens. Together with the history-infused sculpture, the antic pictures give a sense of the versatility of an artist whose role has been the stimulating, mold-breaking one of scholar-clown.”
“…With sources originating from digital readymades or appropriated video, each artist modifies, redirects and redistributes the footage using a wide array of alterations, from simple editing to more detailed and complex reconstructions. The digital realm casts a dark shadow over the initial intent of images and our preconceptions of their meaning and usage into a new alternative mode of existence where the source becomes either a catalyst or an added layer of a whole new work.”
“When people speak about interactive or augmented photography, they immediately have in mind photos represented in digital format (on a computer or phone screen, a projection, etc.) that are manipulated through software or some other code. Yes, interactive pictures can react to our touch, voice, the weather, or whatever. But those interactive photos are still just pixels.
My artwork, Augmented Photography, is not about pixels. It is about re-thinking printed photography. This artwork is more than a framed picture: it has its own behavior and is able to react to observers.
I am adding liveliness to the doll in the picture through eye movements. If no one is looking at the picture, the doll’s eyes are closed. Only from time to time does she wake up and ask for attention. When the photograph is approached, the doll in the picture opens her eyes and starts to blink at the viewer, or just stares at him/her for a while. Hence, the artwork has different behaviors that can be explored by observing the picture for a while.”
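The behavior described above is essentially a small state machine: closed eyes by default, an occasional spontaneous wake-up, and a blink-or-stare response when a viewer approaches. A hypothetical sketch of that logic (the installation's actual sensing and actuation are not specified in the text; names and probabilities here are made up for illustration):

```python
# Hypothetical state-machine sketch of the doll's eye behavior.
import random

def doll_eyes(viewer_present, wake_chance=0.1, rng=random.random):
    """Return the doll's eye state for one update tick."""
    if viewer_present:
        # A viewer approached: open the eyes and blink at them or stare.
        return "blink" if rng() < 0.5 else "stare"
    # No one is looking: eyes stay closed, except an occasional
    # spontaneous wake-up to ask for attention.
    return "open" if rng() < wake_chance else "closed"

print(doll_eyes(True, rng=lambda: 0.4))   # → blink
print(doll_eyes(False, rng=lambda: 0.9))  # → closed
```

Injecting `rng` as a parameter makes the otherwise random behavior testable; in an installation it would be driven by a proximity or face-detection sensor.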
Japanese photographer Sohei Nishino walks around cities taking pictures and pasting and arranging the results to create layered icons of a city from his memory. He has mapped Istanbul, Hong Kong, Paris, New York, Shanghai, Tokyo, Hiroshima, Kyoto, Osaka and London.
‘The images are screenshots from Google Earth with basic color adjustments and cropping. I am collecting these new typologies as a means of conservation – as Google Earth improves its 3D models, its terrain, and its satellite imagery, these strange, surrealist depictions of our built environment and its relation to the natural landscape will disappear in favor of better illusionistic imagery. However, I think these strange mappings of the 2-dimensional and the 3-dimensional provide us with fabulous forms that are purely the result of algorithmic processes and not of human aesthetic decision making. They are artifacts worth preserving.’
“Over the last few weeks Ishac Bertran has been making experiments in the area of ‘Generative Photography’. He describes a process in which digital drawings are sequentially projected onto a screen in a dark room and photographed using long exposure times.”
“Photography only lets you capture instants (even long exposures are only blurred instants). So, I hacked the idea of photography, mixing together many photos of the same scene into a single one, slicing and dicing the images and putting them back together, chronologically. I call the grammar behind it ‘chrono cubism.'”
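The "slicing and dicing" the artist describes can be sketched as an algorithm: photograph the same scene many times, then assemble the final image so that each vertical strip comes from a successive moment. A minimal sketch, assuming images are 2D lists of pixel values and a slice width of one column (the actual works likely use wider, hand-arranged slices):

```python
# Sketch of the "chrono cubism" idea: column i of the output is taken
# from frame i, so time runs left to right across a single image.

def chrono_collage(frames):
    """Reassemble frames of the same scene chronologically by column,
    cycling through frames if the image is wider than the sequence."""
    height = len(frames[0])
    width = len(frames[0][0])
    return [
        [frames[x % len(frames)][y][x] for x in range(width)]
        for y in range(height)
    ]

# Three 2x3 "photos" shot at times t = 0, 1, 2 (every pixel tagged
# with its capture time, to make the chronological stripes visible).
frames = [[[t] * 3 for _ in range(2)] for t in range(3)]
print(chrono_collage(frames))  # → [[0, 1, 2], [0, 1, 2]]
```

Each column of the result belongs to a different instant, which is exactly what makes a moving subject fragment into cubist-looking shards while the static background stays coherent.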