Future #64: Bridges and boundaries
This week’s signals speak to ideas of togetherness — from recommendation systems that could bridge divides rather than deepen them, to apps that aim to enhance relationships (in very misguided ways). But we also look at where we actually need more boundaries — in order to build trust in new ecosystems, or to prevent context collapse around our identities. Plus, there are some hilariously stuck robots.
—Alexis & Matt
Algorithms for harmony
Algorithmic recommendation systems have long been criticized for the kinds of behavior they incentivize. The most common type of recommendation system is engagement-based: you’re more likely to be shown content that elicits clicks, views, and shares. The trouble with this approach is that the content driving the most engagement is often highly sensationalist or divisive, so these systems end up rewarding outrage and misinformation and ultimately causing societal harm. None of this should be news to you if you’ve been online at all in the last several years.
This piece from Aviv Ovadya suggests that, rather than discard the idea of algorithmic recommendations altogether (as some would propose), we might encode new kinds of incentives into those algorithms. Specifically, in place of engagement-based recommendations, Ovadya proposes bridging-based ranking: a system that could recognize which content drives further division between groups and which content actually reduces those divisions:
“Another way to think about this is that engagement-based systems can be thought of as a form of ‘centrifugal ranking’—dividing us into mutually untrusted groups; in contrast, bridging-based ranking is a kind of ‘centripetal ranking’—helping re-integrate trust across groups. If we can do it well, the implications are staggering. Bridging-based ranking would reward the ideas and policies that can help bridge divides in our everyday lives, beyond just online platforms.”
While such a system has yet to be tested in practice, we love the idea of exploring how other kinds of incentives might be encoded in algorithmic systems. This approach also seems feasible in that the behavior it rewards can reasonably be understood by machine learning methods. What other kinds of behaviors might be both societally beneficial and computationally legible, and how might we encode those to lead to better outcomes for technology and humanity?
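To make the contrast concrete, here’s a minimal sketch of the two ranking philosophies. Everything in it (the items, the groups, the per-group approval scores, and the min-based bridging score) is invented for illustration; Ovadya’s piece doesn’t prescribe a particular scoring function, and a real system would need far richer signals.

```python
# Illustrative comparison of engagement-based vs. bridging-based ranking.
# Assumes we can estimate, for each item, an approval score per group
# (e.g., from reactions within each community). All values are made up.

items = {
    "outrage-bait":   {"group_a": 1.0, "group_b": 0.1},
    "shared-concern": {"group_a": 0.5, "group_b": 0.5},
    "niche-post":     {"group_a": 0.3, "group_b": 0.2},
}

def engagement_rank(items):
    # "Centrifugal" ranking: reward total engagement wherever it comes
    # from, which lets divisive content float to the top.
    return sorted(items, key=lambda i: sum(items[i].values()), reverse=True)

def bridging_rank(items):
    # "Centripetal" ranking: score each item by its least-approving group,
    # so only content that resonates across the divide ranks highly.
    return sorted(items, key=lambda i: min(items[i].values()), reverse=True)

print(engagement_rank(items))  # ['outrage-bait', 'shared-concern', 'niche-post']
print(bridging_rank(items))    # ['shared-concern', 'niche-post', 'outrage-bait']
```

The interesting design question is the scoring function itself: taking the minimum across groups is just one crude way to say “reward agreement across the divide,” and measuring that agreement reliably is where the hard machine learning work would live.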
→ Can algorithmic recommendation systems be good for democracy? | Tech Policy Press
Aging gracefully, crashing spectacularly
It’s disappointing enough to have a piece of beloved tech break or fail. Sometimes you drop your phone in the street where it gets run over; other times, it’s your favorite game console becoming way less fun because its online services were decommissioned. What would your reaction be, though, to a part of your own body suddenly shutting down due to lack of support from its manufacturer?
Users of retinal implants made by a company called Second Sight are facing just that eventuality. Hundreds of people rely on Second Sight implants for some rudimentary form of sight, and with the company’s financial collapse, they must now choose between keeping their devices running with scavenged parts, undergoing painful and potentially risky surgery to remove them, or living with a dead piece of technology lodged in their heads.
This story goes deep into the history of the company and the promise of bionic implants targeted at all kinds of conditions, but fundamentally it raises an important and unanswered question: what obligations does a company assume when it sells things that become part of your body?
→ Their bionic eyes are now obsolete and unsupported | IEEE Spectrum
On the Internet, no one knows you’re an ape
Earlier this month, BuzzFeed published a story revealing the two people behind the Bored Ape Yacht Club (BAYC) series of NFTs. Ordinarily this wouldn’t be much of a story; founders of hot new startups typically want attention and publicity. But since we’re talking about NFTs and cryptocurrencies, the anonymity of BAYC’s creators was fundamental to its position in web3. One of the founding principles behind cryptocurrencies and blockchain tech is that participants can remain anonymous, revealing only what they choose to reveal.
From this single anecdote, the author delves into the identity of web3 itself. Is it about freedom? Anonymity? Financial speculation? A trustless future? Does it encompass virtual and augmented reality, or is it just about crypto? Is it for a more equal future where one’s race or beliefs are irrelevant because they’re obscured, or is it about a privileged set of mostly white tech bros pining for the days of the early web and getting richer in the process? Even its strongest proponents don’t agree, and that may be its most defining and enduring characteristic: the one thing web3 is really good at is starting arguments.
Cryptocurrencies promise a future of complete anonymity and the removal of trust as a necessary step in a transaction. That may be true for trust in the transaction itself — replacing a centralized bank with a computational ledger — but without visibility into the buyer or seller, how can we trust that the goods we’re purchasing are safe, that the information we’re reading is true, or that the investment advice we’re being given isn’t guided by ulterior motives?
→ Bored apes, BuzzFeed, and the battle for the future of the internet | Motherboard
No really, how deep is your love?
This essay continues our recent thread of “some things shouldn’t be optimized”, with a critique of relationship apps. Not dating apps, but apps that purport to help couples maintain and enhance their relationships. Many of these apps promote the idea of relationship hygiene or fitness, tracking behavior in a way that treats intimate relationships as just another aspect of one’s life to be rigorously quantified and optimized. Love Nudge, for example, tracks how full each person’s “love tank” is and prompts users to complete a certain number of tasks (like “touch arm”) to fill up their partner’s tank.
“Like period-trackers, step-counters, sleep apps and the world of health tech more widely, relationship apps operate on the premise that self-knowledge looks like a data portrait. They position intimacy as something that can be “hacked,” gamifying the experience of building closeness and promising a certainty that rarely maps onto the way that lives actually unfold.”
Sadly, these kinds of techniques seem to be the predominant approach to relationship apps. But there are some more interesting signals out there — ambient apps like Locket, for example, provide a subtle mechanism for creating moments of connection that don’t require dashboards or KPIs. We’re curious to explore ways that technology could actually promote intimacy without making our relationships subject to even more measures of productivity.
→ Silent Partner | Real Life
I contain multitudes
If you’re one of the millions of people who’ve ridden out the pandemic working remotely, you’ll know the feeling described here by the phrase “the great smushing”: the frictionless transition between the work, friend, family, and personal frames of your being. It happens when your attention drifts in a virtual meeting and you start shopping for school supplies, or when you rush to help get Zoom school working again while on the phone with a client.
The boundaries between different aspects of our lives have been eroding for years, particularly since the advent of smartphones and the “always available” expectations of certain careers. For many, the pandemic accelerated this decay, collapsing all our various selves into one seemingly overnight. The result is that we often feel like we’re not doing enough in any area of our lives, because we find we can’t concentrate long enough on one thing to make real progress. As Zak Jason writes more eloquently: “Showing up for each person in your life requires showing up in many different ways.”
You may be a parent, a spouse, a boss, an employee, a Muslim, a basketball fan, a fly fisher, and a dozen other versions of you. Trying to be all of those equally and at once diminishes each identity. Where previously we benefited from natural breaks between contexts — the office, the bedroom, the gym, the mosque — Jason argues that we now need to create our own versions of those borders between our various selves to give each the space and attention it needs to thrive.
→ Welcome to the great smushing | WIRED
Chatbots that aren’t a**holes
This piece from MIT Tech Review has the promising headline “How to make a chatbot that isn’t racist or sexist”. Spoiler alert: the article does not come up with any clear solution to the problem. Instead, it runs through the various approaches to mitigating chatbots’ tendency to veer into offensive conversation. Some of those approaches are as simple as ex post facto removal of offensive language (similar to “bleeping”). Others take a more proactive approach: building removals into the model itself so that potentially offensive topics are never “learned” by the system, or crafting rule-based responses that sidestep tricky subjects.
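To illustrate the simplest of these approaches, here’s a minimal sketch of post-hoc screening combined with a rule-based fallback. The toxicity_score function is a crude stand-in for a real classifier; the blocklist, threshold, and fallback message are all invented for illustration and aren’t drawn from the article.

```python
# Sketch of post-hoc filtering ("bleeping") plus a rule-based fallback.
# toxicity_score is a placeholder; a production system would use a
# trained classifier rather than a word blocklist.

BLOCKLIST = {"badword1", "badword2"}  # placeholder offensive terms
FALLBACK = "Sorry, I'd rather not get into that."

def toxicity_score(text: str) -> float:
    # Stand-in classifier: the fraction of words that hit the blocklist.
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / len(words) if words else 0.0

def safe_reply(candidate: str, threshold: float = 0.0) -> str:
    # Screen the model's candidate output; if it trips the filter, swap
    # in a canned response that steers around the topic entirely.
    if toxicity_score(candidate) > threshold:
        return FALLBACK
    return candidate

print(safe_reply("that guy is a badword1"))   # -> canned fallback
print(safe_reply("happy to help with that"))  # -> passes through unchanged
```

The obvious weakness of this approach is that filtering happens after the model has already produced something offensive; it treats the symptoms rather than the “internet-scale biases” baked into the model itself.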
The overall conclusion of the researchers interviewed in the article is that offensive speech is an inherent problem for large language models — that “internet-trained models have internet-scale biases”. Given that context, it is likely that companies and creators will have to weigh these risks in assessing whether to employ chatbots. While occasionally offensive commentary or untrustworthy content may be an acceptable risk for some scenarios, we’re seeing chatbots being used in a wider variety of contexts — like medical support — where the tolerance for such failures should be very low.
→ How to make a chatbot that isn’t racist or sexist | MIT Technology Review
Many stuck robots
Any opinions expressed are those of Alexis Lloyd and Matt Boggie, and do not reflect the policies, plans, or beliefs of their employers or any other affiliates.