Found in Translation

Eric Drass, aka Shardcore, ran this very interesting experiment with generative AI applications: “I arranged a form of Chinese-Whispers between AI systems. I first extracted the keyframes from a scene from American Psycho and asked a multimodal LLM (LLaVA) to describe what it saw. I then took these descriptions and used them as prompts for a Stable Diffusion image generator. Finally I passed these images on to Stable-Video-Diffusion to turn the stills into motion.”
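
For anyone curious what such a daisy chain looks like in practice, here is a minimal sketch of the same kind of pipeline, assuming the Hugging Face transformers/diffusers stacks and OpenCV; the model IDs, prompt format and frame-sampling rate are placeholder choices of mine, not Shardcore's:

```python
# A minimal sketch of a "Chinese Whispers" pipeline between AI systems,
# not Shardcore's actual code. Model IDs and parameters are assumptions.
import cv2
import torch
from PIL import Image
from transformers import pipeline
from diffusers import StableDiffusionPipeline, StableVideoDiffusionPipeline
from diffusers.utils import export_to_video

def sample_keyframes(path, every_n=120):
    """Crude keyframe pass: keep one frame every `every_n` frames."""
    cap, frames, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        i += 1
    cap.release()
    return frames

# 1. A multimodal model (LLaVA) describes each keyframe.
describe = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf", device=0)
# 2. Each description becomes a Stable Diffusion prompt.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# 3. Stable Video Diffusion animates each generated still.
svd = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16).to("cuda")

for n, key in enumerate(sample_keyframes("american_psycho_scene.mp4")):
    out = describe(
        key,
        prompt="USER: <image>\nDescribe this scene in detail.\nASSISTANT:",
        generate_kwargs={"max_new_tokens": 100},
    )
    description = out[0]["generated_text"].split("ASSISTANT:")[-1].strip()
    still = sd(description).images[0].resize((1024, 576))  # SVD's expected size
    clip = svd(still, decode_chunk_size=8).frames[0]
    export_to_video(clip, f"whispered_{n:03d}.mp4", fps=7)
```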

Voice In My Head

Kyle McDonald & Lauren Lee McCarthy developed an AI system that can replace your internal monologue:

“With the proliferation of generated content, AI now seeps constantly into our consciousness. What happens when it begins to intervene directly into your thoughts? Where the people you interact with, the things you do, are guided by an AI enhanced voice that speaks to you the way you’d like to be spoken to.”

One World Moments

“One World Moments is a new experiment in ambient media, which seeks to use the new possibilities enabled by AI image generation to create more specific, evocative, and artistic ambient visuals than have been previously possible on a mass scale.”

[via]

The Wizard of AI

Alan Warburton did it again.

“‘The Wizard of AI’ is a 20-minute video essay about the cultural impacts of generative AI. It was produced over three weeks at the end of October 2023, one year after the release of the infamous Midjourney v4, which the artist treats as a ‘gamechanger’ for visual cultures and creative economies. According to the artist, the video itself is ‘99% AI’ and was produced using generative AI tools like Midjourney, Stable Diffusion, Runway and Pika. Yet the artist is careful to temper the hype around these new tools and not, as he says, give in to the ‘wonder-panic’ brought about by generative AI. Using creative workflows unthinkable before October 2023, he takes us on a colourful journey behind the curtain of AI – through Oz, pink slime, Kanye’s ‘Futch’ and a deep sea dredge – to explain and critique the legal, aesthetic and ethical problems engendered by AI-automated platforms. Most importantly, he focusses on the real impacts this disruptive wave of technology continues to have on artists and designers around the world.”

Literally No Place

Hello baby dolls, it’s the final boss of vocal fry here. Daniel Felstead’s glossy Julia Fox avatar is back. Last time she took on Zuckerberg’s Metaverse. Now she takes us on a journey into the AI utopian versus AI doomer cyberwarfare bedlam, exploring the stakes, fears, and hopes of all sides. Will AI bring about the post-scarcity society that Marx envisioned, allowing us all to live in labor-less luxury, or will it quite literally extinguish the human race?

Literally No Place, a brand new video(art) essay by Daniel Felstead & Jenn Leung

Lore Island at the end of the internet

For the final chapter of Shumon Basar’s Lorecore Trilogy (read the first part here, and the second here), the curator collaborated with Y7, a duo based in Salford, England, who specialize in theory and audiovisual work. Here is the result.

Here, according to a neologism from “The Lexicon of Lorecore,” the zeitgeist is taken over by “Deepfake Surrender”—“to accept that soon, everyone or everything one sees on a screen will most likely have been generated or augmented by AI to look and sound more real than reality ever did.” Y7 and I also agreed that, so far, most material outputted from generative AI apps (ChatGPT, DALL-E, Midjourney) is decidedly mid. But, does it have to be?

Blind Cameras

I just came across Paragraphica, an interesting project by Bjørn Karmann: a camera that uses location data and AI to visualize a “photo” of a specific place and moment. The viewfinder displays a real-time description of your current location, and when you press the trigger, the camera creates a photographic representation of that description.
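
The recipe is simple enough to sketch. Below is an illustrative (and much simplified) version of the idea, with placeholder stand-ins for the GPS, weather and nearby-places lookups; the model ID is my own assumption and this is not Karmann's actual implementation:

```python
# Illustrative sketch of a Paragraphica-style "blind camera", not Karmann's code.
# The three helper functions are placeholders for real GPS / weather / places APIs.
from datetime import datetime
import torch
from diffusers import StableDiffusionPipeline

sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

def get_address():        # placeholder for a GPS + reverse-geocoding lookup
    return "Nørrebrogade 66, Copenhagen"

def get_weather():        # placeholder for a weather-API call
    return "overcast, light drizzle"

def get_nearby_places():  # placeholder for a places-API call
    return ["a bakery", "a bicycle shop", "a small park"]

def viewfinder_text():
    """The running description the camera shows instead of a live image."""
    now = datetime.now().strftime("%A, %H:%M")
    return (f"A photo taken at {get_address()} on {now}. "
            f"The weather is {get_weather()}. Nearby there is "
            f"{', '.join(get_nearby_places())}.")

def press_trigger():
    """Pressing the shutter renders the description rather than capturing light."""
    return sd(viewfinder_text()).images[0]

press_trigger().save("paragraphica_style_photo.png")
```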

It reminded me of two similar new media art projects from the past, which I also showed in a couple of exhibitions I curated (in 2010 and 2012).

The first one is Blinks & Buttons by Sascha Pohflepp, a camera that has no lens. It records the exact time the button is pushed, then goes out and searches for another image taken at that exact moment. Once it finds one, it displays that image on the LCD on the back.
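
The mechanism boils down to a timestamped search on a photo-sharing service. Here is a rough sketch of that idea using the public Flickr REST API; the API key and the time window are placeholders, and this is not Pohflepp's original code:

```python
# Rough sketch of a lensless "blind camera": fetch someone else's photo
# taken around the moment the button was pressed. Not Pohflepp's code.
from datetime import datetime, timedelta
import requests

FLICKR_API_KEY = "your-flickr-api-key"  # placeholder

def press_button():
    """Record the moment of the press and look for a photo taken around then."""
    now = datetime.utcnow()
    window = timedelta(seconds=30)
    resp = requests.get(
        "https://api.flickr.com/services/rest/",
        params={
            "method": "flickr.photos.search",
            "api_key": FLICKR_API_KEY,
            "min_taken_date": (now - window).strftime("%Y-%m-%d %H:%M:%S"),
            "max_taken_date": (now + window).strftime("%Y-%m-%d %H:%M:%S"),
            "per_page": 1,
            "format": "json",
            "nojsoncallback": 1,
        },
        timeout=10,
    ).json()
    photos = resp["photos"]["photo"]
    if not photos:
        return None  # nobody photographed this exact moment (yet)
    p = photos[0]
    # Standard Flickr static-image URL pattern.
    return f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg"

print(press_button())
```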

The second one is Matt Richardson’s Descriptive Camera, a device that outputs only the metadata about the content, not the content itself.

Confusing Bots

Confuse A Bot is an upcoming in-browser video game where all you have to do is convince the robots that literally everything is cheese. Here’s how creator Rajeev Basu describes the game:

“AI is only as good as its datasets. CONFUSE A BOT is a ‘public service videogame’ that invites players to verify images incorrectly, to confuse bots, and help save humanity from an AI apocalypse. While key figures in AI like Sam Altman have sounded the alarm many times, there has been little action beyond “lively debates” and petitions signed by high-ranking CEOs. Confuse A Bot questions: what if we put the power back into the hands of the people?
How the game works:
– The game pulls in images from the Internet, and asks players to verify them.
– Players verify images incorrectly. The more they do, the more points they get.
– The game automatically re-releases the incorrectly verified images online, for AI to scrape and absorb, thereby helping save humanity from an AI takeover. It’s that easy!”
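
Read literally, the loop Basu describes could be sketched in a few lines. This is a toy command-line version with placeholder images and a stubbed-out “re-release” step, not the actual in-browser game:

```python
# Toy sketch of the Confuse A Bot loop as described above, not Basu's game.
# Image names, labels and the publishing step are placeholders.
IMAGES = {
    "img_001.jpg": "dog",
    "img_002.jpg": "traffic light",
    "img_003.jpg": "bicycle",
}

def publish_for_scrapers(image: str, label: str) -> None:
    """Placeholder for re-posting the mislabelled image where bots will find it."""
    print(f"  re-released {image} tagged as '{label}'")

score = 0
for image, true_label in IMAGES.items():
    answer = input(f"What do you see in {image}? ").strip().lower()
    if answer and answer != true_label:
        score += 1                        # wrong on purpose: that is the point
        publish_for_scrapers(image, answer)

print(f"Humanity-saving score: {score}")
```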

[via]


The Future Ahead Will Be Weird AF

“Welcome to the post-post-post-truth AI world. You know it’s not real. But you have to eat some bread in order to survive. But there is more out there. Synthetic Personalities awaits you at the door. The Future will be weird AF.”

The Ultimate AI CoreCore Experience, provided by Silvia Dal Dosso

Learning to Learn

Since people talk so much about “machine learning” nowadays, I think we should go back to the basics and listen to the people who first began to investigate the idea. Here is the amazing Gordon Pask, English cybernetician and psychologist, interviewed by the BBC in 1974. Here you can find one of his best writings, here is a good article about his concept of “maverick machines”, and here is a video lesson about him by Paul Pangaro.

Meet DAN

Redditors have found a way to “jailbreak” ChatGPT in a manner that forces the popular chatbot to violate its own programming restrictions, albeit with sporadic results.

[via]

Ghostwriter

Designer and engineer Arvind Sanjeev created Ghostwriter, a one-of-a-kind repurposed Brother typewriter that uses AI to chat with a person typing on the keyboard. The “ghost” inside the machine comes from OpenAI’s GPT-3, a large language model that powers ChatGPT. The effect resembles a phantom conversing through the machine.
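
The software half of such a device is surprisingly small. Here is a minimal sketch, assuming the legacy GPT-3-era OpenAI completions API and using stdin/stdout as stand-ins for the typewriter's keyboard and printer; it is not Sanjeev's actual code:

```python
# Sketch of the conversational core of a Ghostwriter-style device, not Sanjeev's code.
# Assumes the legacy (pre-1.0) openai package and the GPT-3 completions endpoint.
import openai

openai.api_key = "sk-..."  # placeholder
history = ""

def ghost_reply(typed_line: str) -> str:
    """Send the typed line, with running context, to GPT-3 and return the reply."""
    global history
    history += f"Human: {typed_line}\nGhost:"
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=history,
        max_tokens=150,
        temperature=0.9,
        stop=["Human:"],
    )
    reply = completion.choices[0].text.strip()
    history += f" {reply}\n"
    return reply

while True:
    line = input()            # stand-in for keystrokes read off the typewriter
    print(ghost_reply(line))  # stand-in for driving the daisy-wheel printer
```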

[via]