Future #48: All the feels
This week’s signals focus on emotions — how they can be used for foresight, how they might be manipulated for profit, and how they might be misinterpreted by machines. We also look at adaptation, reviving the question of whether our technological systems should adapt to us, or if we should adapt to our systems. And there’s a very funny robot at the end.
—Matt & Alexis
1: Fetch, roll over, beg for an upgrade
We’re always excited to see new work from Kate Darling, and this excerpt from her new book, The New Breed: What Our History with Animals Reveals about Our Future with Robots, does not disappoint. In it, she looks at the current state of emotional manipulation for corporate gain and extends it into a near future with companion robots to whom we are emotionally attached. There are also some interesting asides about Victorian dog kidnapping and the wedding industrial complex. This excerpt sums it up nicely, but we recommend reading the whole thing and are looking forward to the book!
“Woody Hartzog paints a science-fictional scene in a paper called “Unfair and Deceptive Robots” where his family’s beloved vacuum cleaning robot, Rocco, looks up at its human companions with big, sad eyes and asks them for a new software upgrade. Imagine if the educational robots sold as reading companions for children were co-opted by corporate interests, adding terms like “Happy Meal” to their vocabularies. Or imagine a sex robot that offers its user compelling in-app purchases in the heat of the moment. If social robots end up exploiting people’s emotions to manipulate their wallets or behavior, but people are also benefiting from using the technology, where do we draw the line?”
→ On what emotional attachment to robots might mean for the future | Literary Hub
2: Looking backward to look ahead
In this lovely essay about proretrospection, or Emotional Futurism, friend of EFL Christopher Butler puts words to something we’ve long done but never articulated. We naturally tend to approach futurism with a forward-looking lens — we see patterns in the present and project them forward into possible futures. When we’re being extra thoughtful, we’ll sometimes put ourselves in a future position and imagine how we got there, a practice sometimes called a pre-mortem. Both of these approaches tend to be highly analytical, driven by pattern matching and systems thinking, and focused on the mechanics of arriving at a future.
Christopher suggests placing ourselves in the viewpoint of a potential future and imagining looking backwards from there, but with emotion at the forefront. How do we feel in this future state? What do we regret? What flavor of nostalgia presents itself as we look back at our present/past selves? We often find ourselves fascinated by the way perspective shifts with time. It’s striking how impossible it is to identify the moment when a vibrant, modern present era transitions into a sepia-toned, fixed past, or which of life’s moments will be memorable touchstones while we’re living them. We love the idea of engaging with this emotionality more directly and using proretrospection as a means of not only uncovering what futures are possible, but assessing which ones are desirable.
→ Think about the future’s past | Christopher Butler
3: The ouroboros of online reputation management
Aaron Krolik and Kashmir Hill dive deep into the online “slander industry” in this investigative New York Times piece. They begin by posting a fake rumor about Krolik on one of the many websites set up to destroy reputations. These sites seem amateurish at first, but a rumor posted on one site is quickly copied and multiplied across the ecosystem, giving it enough weight to start influencing a person’s search engine results. The journalists scraped more than 150,000 posts about 47,000 people and then checked those people’s search results — for the majority, the gripe sites showed up at the top of their web or image search results. What gives these postings a further sheen of legitimacy is the summary of relevant information that many search engines now display above the results, so that Google may announce in bold type that you are “a liar and a loser” at the top of your search results, in the same way it might announce that Chuck Schumer is the Majority Leader of the US Senate.
What do you do if this happens to you? Well, you’re in luck. All of these gripe sites are rife with advertising for reputation management companies that claim to get troublesome posts removed and restore your online reputation. BUT. (Of course there’s a “but”.) Hill and Krolik did some very impressive digging — honestly, the tick-tock of investigative reporting is reason enough to read the article — and discovered that not only do these services charge extortionate rates, but they are also mostly owned by the same people who run the slander sites themselves. In other words, an entire online industry profits from destroying people’s reputations and then charging the victims huge sums to fix the very problem it created.
→ The slander industry | The New York Times
4: Yayagram & flexible interfaces
Both of our futurist friends called Christopher (Kent and Butler) sent this fantastic project our way. Manuel Lucio Dallo created an interface called Yayagram so that his 96-year-old grandmother (“yaya” means grandma in Castilian Spanish) can communicate more easily with her family. It is a physical device reminiscent of an old-school telephone switchboard, connected to the Telegram app on the backend. His grandmother plugs in a jack to pick a recipient, then holds down a physical record button to send a voice message. And since she is hard of hearing, incoming messages print out on a thermal receipt printer. We love the idea of a single piece of software (in this case, Telegram) having a wide variety of interfaces for different types of users. Imagine if, instead of everyone adapting to a single UI, there were multiple interfaces for people with different needs, contexts, abilities, and aesthetics. These could be a fixed set defined by the software developer, or an open-source-style kit of parts that users could remix themselves.
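The project’s actual code isn’t reproduced here, but the interaction model is simple enough to sketch. Below is a minimal, hypothetical Python version of the routing logic: jack sockets map to Telegram chat IDs, and the transport is injected as a callable so that real hardware and a real Telegram client could be swapped in later. All names and IDs are invented for illustration, not taken from Dallo’s implementation.

```python
# Hypothetical sketch of Yayagram-style routing: a physical jack selects
# a recipient, a record button captures audio, and the message goes out
# through whatever transport is plugged in (Telegram in the real device).

# Invented placeholder mapping of jack sockets to chat IDs.
JACK_TO_CHAT = {
    1: "chat_id_manuel",
    2: "chat_id_family_group",
    3: "chat_id_carmen",
}

def dispatch_voice_message(jack_position, audio_bytes, send):
    """Route a recorded voice note to the chat wired to this jack.

    `send` is any callable(chat_id, audio_bytes) -> bool, e.g. a thin
    wrapper around a Telegram bot's voice-sending call, or a stub in tests.
    """
    chat_id = JACK_TO_CHAT.get(jack_position)
    if chat_id is None:
        return False  # jack not plugged into a known socket
    return send(chat_id, audio_bytes)

def format_for_printer(message):
    """Render an incoming text message for a thermal receipt printer."""
    return f"From {message['sender']}:\n{message['text']}\n"

# Stub transport standing in for the Telegram API.
sent = []
def fake_send(chat_id, audio):
    sent.append((chat_id, audio))
    return True

dispatch_voice_message(2, b"voice-note", fake_send)
```

Injecting the transport keeps the hardware loop (buttons, jacks, printer) independent of the messaging backend, which is exactly what makes the “one app, many interfaces” idea practical.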
5: Who adapts?
If you love to be infuriated by Elon Musk, you’re going to want to read this article, aptly titled “Elon Musk shares painfully obvious idea about the difficulty of self-driving cars”. That idea? Basically, that it’s hard for computers to drive because roads were designed for humans.
“Musk would love to rip up the highway system and build something that was easier for his Tesla cars to identify and drive on safely. But that’s not the problem that confronts him. That’d be like robot manufacturers demanding that all new buildings only have one floor and no stairs because most human-style robots have difficulty climbing up stairs. The old Darpa Robotics Challenge had real-world obstacles for a reason. We want robots to adapt to our world, we don’t want to change our world to make the robots more comfortable. As journalist Kelsey Atherton put it, ‘Tech interprets humans as flawed and seeks to route around them.’”
This piece once again demonstrates that Musk gets excited about creating a seductive vision of the future while showing little interest in engaging with existing cultural or infrastructural contexts to make that future plausible.
6: Grin and bear it
This deep dive from Kate Crawford explores the history of the science of facial expression. Long story short: the scientific basis for assuming that facial expressions universally reveal human emotions is tenuous. The research is weak, with serious methodological issues, and there is evidence that facial expression as a signal of emotion varies significantly across cultures and subcultures. The problem, of course, is that a huge industry is being built on top of “affect recognition”, one that is already deployed across many contexts and may determine everything from whether you’re flagged as a terrorist to whether you’re suitable for a job.
→ Artificial intelligence is misreading human emotion | The Atlantic
One hilarious robot
“Unassuming” and “sneaky” are definitely not the adjectives we would use for this bouncy boi.