We Live in Public


I just stumbled upon the story of Josh Harris. How did I miss this until now???

“Josh Harris is the founder of Jupiter Communications and Pseudo.com. The dot-com pioneer dreamed up legendary art project Quiet: We Live in Public, a late ’90s spycam experiment that placed more than 100 artists in a ‘human terrarium’ under New York City, with webcams capturing their every move. It ended badly, and Harris’ personal life later took a dive when he tried a similarly intimate stunt in his own loft.”

Here’s a documentary.

The Isolator

“The Isolator is a bizarre helmet invented in 1925 that encourages focus and concentration by rendering the wearer deaf, piping them full of oxygen, and limiting their vision to a tiny horizontal slit. The Isolator was invented by Hugo Gernsback, editor of Science and Invention magazine, member of ‘The American Physical Society,’ and one of the pioneers of science fiction.”

(via Laughing Squid)

Adding Objects Into Photos

Kevin Karsch and his team at the University of Illinois at Urbana-Champaign are developing a software system that lets users easily insert objects into photographs, complete with convincing lighting and perspective:

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference.
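The “lighting interactions” part is the interesting bit: the object has to cast shadows and reflections back onto the photo. Insertion systems like this typically rely on differential rendering, where the modeled scene is rendered twice, with and without the synthetic object, and the difference is transferred onto the original photograph. A toy per-pixel sketch of that idea (my assumption about the general technique, not code from the paper):

```python
# Toy per-pixel "differential rendering" composite: render the modeled
# scene twice (with and without the synthetic object), then add the
# difference -- shadows, reflections -- back onto the original photo.
# A sketch of the general technique, not the authors' implementation.

def composite(photo_px, with_obj_px, without_obj_px, obj_mask):
    """Combine one pixel of the photo with the two rendered versions."""
    if obj_mask:               # pixel covered by the object itself
        return with_obj_px     # use the rendered object directly
    # elsewhere, transfer only the object's effect on the scene
    return photo_px + (with_obj_px - without_obj_px)

# A pixel where the object darkens the render by 10 (a cast shadow)
# darkens the photo by the same amount: composite(100, 80, 90, False)
```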

(via Laughing Squid)

Film Grenade

The Throwable Panoramic Ball Camera, designed by Jonas Pfeil as part of his thesis project at the Technical University of Berlin, creates spherical panoramas after being thrown into the air:

“Our camera uses 36 fixed-focus 2 megapixel mobile phone camera modules. The camera modules are mounted in a robust, 3D-printed, ball-shaped enclosure that is padded with foam and handles just like a ball. Our camera contains an accelerometer which we use to measure launch acceleration. Integration lets us predict rise time to the highest point, where we trigger the exposure. After catching the ball camera, pictures are downloaded in seconds using USB and automatically shown in our spherical panoramic viewer. This lets users interactively explore a full representation of the captured environment.”
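The apex trigger is straightforward projectile physics: integrating the measured launch acceleration gives the release velocity, and a vertically thrown ball reaches its highest point v/g seconds later. A rough sketch of that calculation (my reconstruction of the description above, not Pfeil’s firmware):

```python
# Predict when a thrown ball reaches its apex: integrate accelerometer
# samples over the throw to get release velocity, then divide by g.
# A sketch of the physics described above, not the camera's actual code.

G = 9.81  # gravitational acceleration, m/s^2

def launch_velocity(samples, dt):
    """Integrate acceleration samples (m/s^2, gravity removed) taken
    every dt seconds during the throw to get the release speed."""
    return sum(a * dt for a in samples)

def rise_time(v0):
    """Seconds from release until a vertically launched ball peaks."""
    return v0 / G

# e.g. a 0.2 s throw at a constant 50 m/s^2 releases the ball at
# 10 m/s, so the exposure fires about a second after it leaves the hand
```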

(via BLDGBLOG)

The future is arriving (at last)

Researchers at UC Berkeley have developed a system that reads people’s minds while they watch a video and then roughly reconstructs what they were watching from thousands of hours of YouTube footage.

Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still inside the MRI scanner for hours at a time.

They watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or ‘voxels.’

‘We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,’ Nishimoto said.

The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.

Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.

Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
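That last step, ranking the library clips and blending the best matches, can be sketched like this: score each candidate by how well its predicted brain activity correlates with the observed activity, then average the top-scoring frames. A toy illustration under my own assumptions (a correlation score and per-pixel averaging), not the researchers’ code:

```python
# Toy version of the final reconstruction step: pick the candidate
# clips whose *predicted* brain activity best matches the *observed*
# activity, then average their frames into one blurry reconstruction.
# An illustrative sketch, not the study's actual pipeline.

import math

def correlation(x, y):
    """Pearson correlation between two equal-length activity vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reconstruct(observed, candidates, top_k=100):
    """candidates: (predicted_activity, frame_pixels) pairs.
    Averages the frames of the top_k best-matching candidates."""
    ranked = sorted(candidates,
                    key=lambda c: correlation(observed, c[0]),
                    reverse=True)
    best = [frame for _, frame in ranked[:top_k]]
    return [sum(px) / len(best) for px in zip(*best)]
```

Averaging many near-matches is why the published reconstructions look ghostly and blurred rather than like any single YouTube clip.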

(via kottke.org)

Off Book: Visual Culture Online

“For decades now, people have joined together online to communicate and collaborate around interesting imagery. In recent years, the pace and intensity of this activity has reached a fever pitch. With countless communities engaging in a constant exchange, building on each other’s work, and producing a prodigious flow of material, we may be experiencing the early stages of a new type of artistic and cultural collaboration. In this episode of Off Book, we’ll speak with a number of Internet experts and artists who’ll give us an introductory look into this intriguing new world.”

Featuring:

Chris Menning, Viral Trends Researcher, Buzzfeed
MemeFactory, Internet Researchers
Olivia Gulin, Visual Reporter, Know Your Meme
Ryder Ripps, Artist and Co-Creator, Dump.fm
John Kelly, Ph.D., Founder and Chief Scientist, Morningside Analytics