Future #63: Our cyborg selves
There is an important thread woven through this week’s signals, one that speaks to our uncomfortable dance with technology. We discuss how we might want robots to care for us, when it’s appropriate to offload ethics to machines, and how it feels when everything is an avatar for the real thing. We are building a reality in which we are continuously augmented in one way or another. As we do, we need to understand where we want our human selves to end and our cyborg selves to begin.
—Matt & Alexis
1: Web3 as the logical endpoint of capitalism
Every once in a while, you read a piece of writing that makes a jumble of ideas in your head suddenly click into place. This analysis of Web3 by our friend Ian Bogost had that effect. It clarifies what Web3 represents in the evolution of the internet — and in doing so, also highlights what is so discomfiting about it.
Bogost reframes Web 1 and Web 2 through a financial lens, in order to better explain the current shift. Web 1 represents “marketization”, where the traditional business of buying and selling physical goods was ported to a digital environment, with the rise of sites like Pets.com, eBay, and of course, Amazon. With Web 2, the rest of the web — the non-commercial activities like blogging and photo sharing — became “monetized”, where advertising was used as a means of extracting value from attention and engagement. With Web3, we are moving beyond marketization and monetization to securitization, a process by which every digital object and action is a potential financial asset on which one can speculate and trade.
“First the internet made it easy for people to conduct their lives online. Then it made it possible to monetize the attention generated by that online life. Now the digital exhaust of all that life online is poised to become an asset class for speculative investment, like stocks and commodities and mortgages.”
Imagine all of the greed and exploitation of the stock market, but with no safeguards against shady practices, and extended into every nook and cranny of your digital life. It’s being hyped as the future — and it may well be — but it’s a far cry from the communal utopianism of the early web, and it lacks even the layers of foundational connection and meaning that characterized the web’s second wave. It’s explicitly just about the money. As Bogost puts it, “now the wealth seeking is printed on the tin.”
→ The internet is just investment banking now | The Atlantic
2: Robots and the rituals of care
As we consider a near future where robots are deployed as caregivers, such as for the elderly or the ill, how do we define what it means to care for someone? In this thoughtful essay, Annabelle Johnston unpacks what constitutes care, and whether robots are capable of providing it effectively. Many critics of care robots construct care as necessarily mutual: a reciprocal emotional experience that is beyond the abilities of an artificial being. Without that mutual exchange, the concern is that those cared for by robots will be stripped of a necessary aspect of support.
But Johnston argues that the ritual of care may be more important in creating a sense of being nurtured than the emotional engagement of the carer. She points to the ways in which we already form emotional attachments to “relational artifacts” from the Tamagotchi to Aibo to Siri, in ways that aren’t reliant on the interiority of those beings. She also reframes the idea of a robot as more of a cyborg (in the Donna Haraway sense), where they can extend the human ability to care beyond the limits of human bodies:
“The robot is a media form that allows humans to care for each other across distances of time and space…Its execution may be limited and biased by that process of mediation, but it does not necessarily negate the care it makes manifest.”
While there are certainly complexities to the reality of robotic care that can’t be reduced to a “good” or “bad” assessment, this piece is meaty food for thought that might expand your perspective on the issue.
→ Routine care | Real Life Mag
3: Delegation as an act of trust
We talk so much about trust in our work: how we create it, where we need it, and how we rebuild it. Web3 and cryptocurrencies posit a transaction without trust: the ledger entry is all that matters, if you assume the medium is immutable. As we’ve seen from countless examples, however, that trust has simply been shifted from large institutions with verifiable histories to independent developers and sites that have broken trust so often they’ve created new vocabulary (“rug pull” being one we’ve seen a lot recently). We also talk about trust, albeit more obliquely, when we talk about delegating decision-making to a machine-learning algorithm, often hoping that the objectivity of the technology will yield better results than our subjective human decision-making. This incisive piece by Jenny Zhang pulls apart the concept of trust, particularly as humans relate to computational systems.
The trouble with trust in these situations is a lack of nuance. Encoding information for consumption by a computer requires simple signals. When an AI is trained, it starts from zero and is fed thousands of data points meant to teach the machine how to parse signals into understanding. That may be enough for a machine-vision algorithm to recognize a stop sign, but it is not nearly enough to render a judgment about how likely a formerly incarcerated person is to commit a crime, or whether a particular borrower is a good credit risk. As Zhang beautifully states:
“It’s not clear to me whether any computational system dependent on the concept of ground truth can be compatible with the rich context of human relationships. It’s not clear to me that we should want it to.”
Zhang connects this thread to a deeper question: why do we hope to train machines to do those tasks we, as humans, are not only better at, but deeply enjoy? Could we reduce or eliminate this trust imbalance by simply delegating those tasks we can easily trust to the machine, thus freeing up all kinds of creative time and energy in humanity?
“We are so excited by the idea of machines that can write, and create art, and compose music, with seemingly little regard for how many wells of creativity sit untapped because many of us spend the best hours of our days toiling away, and even more can barely fulfill basic needs for food, shelter, and water … I can’t help but wonder how rich our lives could be if we focused a little more on creating conditions that enable all humans to exercise their creativity as much as we would like robots to be able to.”
→ Morals in the machine | Jenny Zhang
4: Religion, technology, and innovation
We don’t often consider the interplay between technology and religion, but this interview from the Institute for the Future highlights ways that innovation practices have made their way into the church. The interview is with Reverend Lorenzo Lebrija, who founded TryTank, a laboratory for innovation in the Episcopal Church. He is applying design thinking and foresight practices to religion, and has even written a book entitled Design Thinking and Church Innovation. He takes typical design and innovation approaches to experiment with everything from pop-up mall monasteries to PriestBots. In addition to applying these practices himself, TryTank is also setting up innovation hubs with 400 partner congregations to galvanize this kind of experimentation in the broader community.
→ Rise of the PriestBots: Innovation at the edge of faith and meaning in the Church | Institute for the Future
5: Lighting a path to the future
We’ve often looked to science fiction and speculative fiction as tools for illustrating what’s possible, allowing us to either build towards such a future, or to veer away from an undesirable path. For a long time, it seemed like the latter was the more necessary role. There has been no dearth of technological cheerleading in our society, with shiny future visions of prosperity being propagated by everything from The Jetsons to Apple commercials. So the fictional narratives that took a more dystopian or critical perspective were necessary ways of understanding the unintended consequences of the technologies we are building.
But the world has shifted, and what we need from our science fiction writers may also be changing. This New Yorker profile delves deep into Kim Stanley Robinson’s work, especially as it relates to the climate crisis. In contrast to sunny technological optimism, the mood around climate change can often tilt beyond pessimism to a kind of nihilism, where we doubt our ability to effect the kind of change needed to substantively alter the outcome for humanity. In that context, what’s necessary is not a critical perspective, but a galvanizing and hopeful one that illustrates realistic remedies. Robinson discusses the need to balance on a razor’s edge between dystopia and utopia in order to light a path forward that is plausibly optimistic:
“He is especially impatient with those who urge giving up when giving up is against their best interests. What he seeks to practice is, in a phrase popularized by the Marxist philosopher Antonio Gramsci, ‘pessimism of the intellect, optimism of the will.’”
→ Can science fiction wake us up to our climate reality? | The New Yorker
6: The uncanniness of the everyday
If you’ve felt like the world’s been a little off for the last couple of years, you’re not wrong. Leaving aside the confusion, frustration, and fear that the Covid pandemic has created, we have all been subjected to far more “virtual” experiences than we would have been previously. Foremost is how much of our communication happens in mediated settings: our pictures (whether truly of us or something funny we use to stand in for ourselves) sit next to the texts we write and the Slack messages we post. Our faces appear in grids in virtual meetings, lacking depth and nuance, hiding our true heights and flattening the topography of our features. And lately, we’re presented with images of how we might interact in virtual space, with “legless, anime-eyed, gummy-torso” versions of ourselves floating through a corporate campus.
This feeling, Christopher Butler argues, is that of the uncanny valley. While the term is typically used to describe a person’s unease around an artificial life form that is both too close to and too far from appearing real, Butler adapts and expands on this definition in a compelling way. The avatar experiences described above feel uncanny not because that face in a circle is too close to my own, but because the experience we have in these flat planes can almost, but not quite, approximate the intimacy and social connection we felt in person. It’s that “not quite” that contributes to the unease.
He closes by expanding on this idea and asking designers to consider how intimacy and trust are created: are we creating a truly new social interaction, or clumsily recreating an existing one in a new medium? Recently, Anne Helen Petersen compared working and socializing remotely to how People Magazine first engaged with the web: it simply scanned the pages of the magazine and made the images available on AOL. Both Petersen and Butler encourage us to make better use of new interactions and, rather than mimic what came before, create something that could only exist now.
→ We live in the uncanny valley now | Christopher Butler
One ______ for _________
Fun, generative language toys on the internet!
Any opinions expressed are those of Alexis Lloyd and Matt Boggie, and do not reflect on the policies, plans, or beliefs of their employers or any other affiliates.