Happy Thanksgiving! This week we dive deep into the uncanny valley, trying to find where a person ends and a bot begins, and how the distinction is perhaps becoming less and less meaningful. Read on to discover the inherent bot-ness of bots, the ways in which we’re already becoming cyborgs, and how to give gifts this holiday season that protect your loved ones’ privacy.
—Matt & Alexis
1: Stop trying to make machines be like people
Last week, Alexis published an essay on a topic she’s been exploring for a while: how we design interactions between humans and machine intelligence. She uses three archetypes — C3PO, Iron Man, and R2D2 — to describe different models for these interactions. C3PO is the model that’s most familiar, where we try to make machines act like humans, and she argues that it’s a fundamentally uninteresting approach that’s doomed to fail most of the time. Iron Man, or using machines to augment ourselves, is a somewhat more compelling model that lets us use computational capabilities to give ourselves superpowers. But where things get deeply weird, delightful, and full of possibility is when we look at R2D2, a model which suggests machine intelligence as a strange companion species that we can treat as a creative collaborator:
“Collaborating with machine intelligence means being able to leverage their particular, idiosyncratic way of seeing and incorporate it into creative processes… it lets us delight in the strangeness of that unfamiliar gaze, but also can help us see hidden patterns and truths in our human artifacts.”
Matt Webb wrote this post yesterday about Ben Hammersley’s new startup, and in it he articulates a perspective on collaborative AI I’ve been thinking about for a while, and I’m excited to see more…
2: Distortion and playing with imperfection
Upon reading Alexis’s piece, our friend Simone Rebaudengo reached out because he had been thinking along similar lines. Simone recently wrote this lovely piece with Nick Foster called “On Distortion”. In it, they compare our current approach to AI to the way musicians dealt with electric guitar amplification in the mid-20th century. A side effect of amplification was that it often resulted in distortion, making the guitar sound buzz, crackle, and pop. Distortion was initially seen as undesirable because the intended effect was perfect replication and amplification of the sound. But, as we know, guitarists in the 1950s and 1960s started playing with that distortion as a creative material, generating an aesthetic that shaped major musical movements. They posit that we’re in a similar phase with machine learning, where we’re striving for perfect simulation (the C3PO model) when it would be much more interesting to play with the imperfections as a creative canvas.
(Nick Foster, incidentally, is also the author of one of our all-time favorite pieces on futures design, “The Future Mundane”)
In 1929 a small guitar company by the name of Vega produced and launched a portable valve amplifier to pair with their line of banjos, releasing musicians from the large, static PA systems which…
3: Silicon Uncanny Valley
Speaking of simulating humans, this OneZero piece takes a deeper look at the importance of faces in robot design. Agility Robotics learned this lesson the hard way when it introduced headless robots meant to work alongside people in industrial and academic settings. The robots were person-shaped because they needed to navigate spaces designed for human bodies, but they didn’t really need heads to complete their tasks. However, the missing heads unnerved their human co-workers far more than anticipated. Heads, and the faces that usually accompany them, are very important to us as human beings, and one of the main ways we “read” the intentions of others. As a result, the design of robotic faces has been a topic of study and experimentation for years, from Baxter to the PR2. And it’s not just a matter of sticking a face on a robot, but also of making sure the facial expressions are aligned with the actual capabilities of the machine:
“A key task for roboticists is avoiding what McGinn calls ‘unbalanced design,’ where user expectations differ from the machine’s actual ability. This is why even though the presence of a mouth makes a robot seem friendlier, Cakmak would still caution designers against slapping one on, as the presence of a mouth that doesn’t communicate verbally would be confusing to users.”
The robot Digit stands approximately five feet, four inches high, with a metallic torso the teal color of a hospital worker’s scrubs. It can walk up and down staircases and around corners on two…
4: AR for your ears
We’ve been thinking for years about how audio can be used spatially, and particularly, how information can be conveyed ambiently in ways that feel supportive rather than interruptive. Most experiments of this type, guided walks of one form or another (including a project Alexis did ages ago during her MFA work), have been specific to a given context, like walking through a museum or other constrained space, but haven’t quite touched on the randomness of how people typically navigate cities. Meanwhile, AR experiences have shown a lot of promise, but have struggled with the awkwardness of having to hold your arms out in front of you, almost using your device as a dowsing rod for locative information.
That’s why we were excited to read about MarsBot, an experiment from Foursquare Labs that has put a lot of care into the detailed interaction design around a locative audio experience. It would be super easy to build a simple audio layer on top of Foursquare’s Pilgrim SDK that read out tips about businesses you pass, but it would likely generate far too much chatter to be useful. Foursquare’s Labs team spent a lot of time thinking about the interactions: being explicit about how and when the app was on and tracking the user’s location, limiting interruptions to only the most important, keeping each interruption as short as possible (10 or fewer syllables!), and considering how those interruptions would affect the rest of your listening routine, like pausing a podcast or ducking the volume of your songs.
With so many city dwellers wearing headphones and earbuds so often, it’s likely we’ll see a lot more experimentation around these interactions coming soon (or rather, once more people can be out and about). We hope that any future experiment or shipping product using locative audio does as much to be polite and supportive as this project did.
Today we’ve got a new experiment for you to play with: Marsbot for AirPods, a lightweight virtual assistant that proactively whispers local recommendations (and other fun snippets) into your headphones or earbuds as you’re walking around.
5: Cyborgs aren’t what we thought
We took a long look at this piece in The Guardian about Apple’s ambitions for augmented reality glasses, and while there’s a lot of the usual “what about privacy” and “why should digital tech mediate reality”, we were most taken with the initial few paragraphs. In short, this article posits that the Apple Watch and the iPhone have already made us into cyborgs, and frankly, we agree.
Consternation about how technology changes human cognition goes back at least to Plato’s Phaedrus, in which Socrates argued that writing anything down meant you didn’t really know the subject, because true understanding had to live entirely within the mind. We have since found numerous ways of offloading our memories, from commonplace books to photo albums to journals, and now to Google searches and nearly-infinite storage of email correspondence. Each new advancement has meant that less of our brains needs to be dedicated to simple memory.
Now, with GPS, online maps and directions, and contacts that follow us from device to device, there are new areas of cognition we can give up, leaving more space in our minds for creativity, leisure, or even nothing. Forget brain implants and body modification; many people already lean on devices to augment their brains without all that. With new AR glasses tech on the horizon, it’s increasingly likely that you’ll be a cyborg (if you aren’t already!) without any of the pain or complications from surgery.
With its iPhones, watches and forthcoming smart glasses, Apple’s gadgets are increasingly becoming extensions of our minds and bodies.
6: Copyright for virtual celebs
As anyone who’s ever worked with a lawyer knows, the details of your agreements matter enormously. What seems like nitpicking in the moment may well be the thing that saves you later on.
This story in The Verge describes just such a transaction, but with a twist: the intellectual property in question is a streamer’s virtual avatar, which she uses to stream games on Twitch, create posters and t-shirts of her likeness, and even pose on PornHub. This “body” was created by an artist who, while paid for the initial design work, has filed a copyright complaint against the streamer, getting her temporarily banned from Twitch.
Ownership of creative property is an extremely sticky legal area, and this virtual avatar reminds us of Emily Ratajkowski’s essay in The Cut about how a model often doesn’t have creative control over how her image is used. What does it mean to “own” one’s body or likeness, and does that ownership model change when that likeness is purposefully constructed? Perhaps more to the point, is there really much difference between the creation of a virtual avatar and the curation of a celebrity’s public persona?
Projekt Melody was banned from Twitch when an artist filed copyright strikes claiming her body didn’t belong to her.
One secure gift
Looking for holiday gifts for your privacy-focused friends? Mozilla has put together this buying guide for connected products that protect your security and privacy. Each product review includes info on whether it can snoop on you, what data it collects, and the most important section, “What could happen if something goes wrong?”
Smart home gadgets, fitness trackers, toys and more, rated for their privacy & security