In vitro brains, ethical offsets, and (of course) MIT
After a week off spent driving 3,200 miles across the U.S., from Los Angeles all the way to Brooklyn, we are back with six new links for you! We’ve got in vitro brains, voice assistants powered by people, and more. But first, let’s start with the elephant in the room: the developing MIT Media Lab story and its implications for the broader contexts we need to consider when we talk about the ethics of technology.
1: Epstein, Ito, Negroponte, and basic decency
As we were preparing this week’s issue, Ronan Farrow and The New Yorker published a deeply reported story about the lengths to which the MIT Media Lab went to conceal Jeffrey Epstein’s ongoing contributions. After a week peppered with revelations and confrontations, this story reveals what we all feared: members of the Lab, Nicholas Negroponte and Joi Ito in particular, knew what they were doing was reprehensible and did it anyway.
Since the focus of the Ethical Futures Lab is the ethics of technology and its applications, we knew we needed to start here this week. That said, unlike most of what we share with you, the ethics here are unsubtle and uncomplicated. If you’re doing something that you feel you need to hide or lie about, something you know you’d be pilloried for if it came to light, you’re acting unethically. And if you find out the organization for which you work is acting unethically, it’s your duty to call attention to it (as Kate Darling, Ethan Zuckerman, and Nathan Matias have so bravely illustrated).
Plenty of people have given thoughtful analyses and responses, so suffice it to say that if we can take anything from this awful situation, it’s this: when we consider technology and ethics, we need to go beyond the implications of the technology itself and also look to the economic and institutional contexts in which those technologies are developed. Funding structures and organizational power are equally at play when we consider how to develop and design tech that is beneficial to society.
How an Élite University Research Center Concealed Its Relationship with Jeffrey Epstein
Ronan Farrow on new documents that show that the M.I.T. Media Lab was aware of Jeffrey Epstein’s status as a convicted sex offender, and that Epstein directed contributions to the lab far exceeding the amounts to which M.I.T. has publicly admitted.
2: No such thing as an ethical offset
So, what do you do if you find out your company is acting unethically? If you’re working on a product you know to be potentially dangerous or damaging, can you still contribute to it and use your spare time and money to give back to good causes in other ways? This idea of an “ethical offset” is common; one might argue Epstein’s gifts to MIT and other institutions are examples of using money to burnish a reputation or assuage guilt. Mike Monteiro doesn’t buy it.
In this piece (which borrows concepts from his book Ruined by Design), Mike states it plainly: if you want to do good, start with your job. As he argues more extensively in the book, designers are the gatekeepers of what gets made, and thus have the power to say “no” to bad ideas, point out what better solutions would look like, and advocate for them.
“Only you can clean up the place where you work, and if you want to take a stand, if you want to make a difference, it needs to start at the place where you draw your paycheck. Because if you are earning a living somewhere that makes the world a worse place, there is absolutely nothing more important you can do than take a stand right there.”
Dear Designer: If You Want to Make the World a Better Place, Start With Your Day Job
If the company you work for is doing something shady, it’s not enough for you to donate to charity. You can’t offset all that bad with just a little good — you have to take a stand.
3: Voice assistants & the legibility of systems
By this point it shouldn’t be surprising that “intelligent assistants” rely on human review and interaction to function. As far back as the launch of Facebook Messenger and its I-can-do-anything bot persona “M”, it’s been clear that humans are required to train, correct, and refine machine learning algorithms, especially when they are user-facing. Still, when Facebook became the latest provider of virtual assistants to admit that employees were listening to select recordings (Microsoft, Amazon, Apple, and Google having all been found out previously), the news was received with shock and indignation.
The key question here isn’t “why are people listening to what I say around my Google Home?” but “why wasn’t Google comfortable telling me that?” Imagine if, when opening the box containing your new virtual assistant, you were greeted with a well-designed fact sheet that explained how the system actually worked. It wouldn’t have taken much:
“This device will automatically recognize your speech and respond to your request without human intervention. From time to time, we will send selected recordings to people for transcription; we will do this no more often than once a week, and do so to improve our system and increase the number of inputs it can understand. If you want to opt out of this collection, whether temporarily or permanently, just move this switch to the off position.”
Alexis has explored these ideas before in her Playable Systems post (and related talk), specifically the concepts of Legibility (making the system’s inner workings understandable) and Human in the Loop (allowing a user to direct changes in the system, override defaults, and otherwise express her intent). As this Atlantic piece describes, much of the concern about these new participants in our social spaces stems not from how they actually work, but from the suspicion they create by acting so opaquely.
Facebook Listening to Users Isn’t Just a Privacy Scandal
The secrecy surrounding AI products makes even basic information about them a scandal.
4: Synthetic intelligence & manufactured brains
For the first time ever, brain tissue grown in vitro has demonstrated electrical activity that looks remarkably similar to the kind of activity exhibited in premature babies’ brains. These lab-grown brain organoids are far simpler than actual brains in a number of ways, so there isn’t yet functional equivalence with how human brains work. But the development has caused both excitement and ethical concern among scientists. On the one hand, it could open opportunities to study brain disorders and yield new understanding of neural pathways. On the other hand, some researchers are “genuinely concerned at the proximity of developing consciousness in a tub of culture in a lab. So far, none of the brains show any signs of consciousness, but as the experiment continues, it could be a possibility.”
Brain Waves Have Been Detected Coming From 'Mini Brains' Grown in The Lab
For the first time, brain tissue grown in a lab has spontaneously exhibited electrical activity, and it looks startlingly similar to human brain activity.
5: “Can you convict me now?”
In Denmark, authorities are planning to review 10,000 court verdicts because of errors in mobile phone data that was used as evidence in those cases. Some of the data was at too low a resolution to accurately determine location, and some linked phones to the wrong cell towers, potentially implicating innocent people in crimes. The interesting thing here is that a system optimized for one goal, telephony, is now being used for a completely different purpose, one it wasn’t designed for and one with potentially serious consequences. As the director of the Danish Telecom Industry Association put it, “We are not created to make surveillance systems, but to make phone networks. Our data is for our purposes so people can speak together.”
Cell tower locations have been critical to identifying suspects and convicting criminals (among many examples, cell tower pings were key to Adnan Syed’s conviction for the murder of Hae Min Lee). If the flaws underlying this revelation apply to networks in other countries, which they likely do, this could prompt a much larger re-evaluation of convictions worldwide.
Flaws in Cellphone Evidence Prompt Review of 10,000 Verdicts in Denmark
Data offered in court was flawed when technical errors linked some phones to the wrong towers or created a less-detailed picture of their locations, officials said.
6: The moral implications of new choices
In a deeply thought-provoking piece, Fiona McEvoy discusses the limitations of choice in the development of new technologies. Specifically, she points out that opting out of a new technology is itself a choice, and often one that carries real ramifications that didn’t exist before the technology was available. Using the near-term possibility of predicting or altering an unborn child’s genetic makeup as an example, she points to the cost of having to make a choice at all, the pressure to conform that new choices can bring, and the responsibility for decisions that is deflected from the creator onto the user:
“Choice creates responsibility. By introducing a new option into the world its use or otherwise now becomes the responsibility of the user. As Dworkin says, “Once I am aware I have a choice, my failure to choose counts against me.” In future scenarios, perhaps we will see a decision to prevent a teenager using a conversational therapy bot as in some way negligent. The same when it comes to surveillance technology in cars that can perceive emotions and alertness.”
One heartwarming thing: A restaurant where you’ll get something, but maybe not what you asked for
We recently stumbled across The Restaurant of Mistaken Orders via Jason Kottke. This set of pop-up restaurants in Tokyo employs men and women living with dementia as servers. Instead of treating dementia as an affliction to be managed, the project embraces the condition, leading to deeper understanding, unexpected interactions, and a respect for our mutual humanity. It’s our hope this framing can extend to other places, where we can celebrate people’s contributions and unique abilities instead of mourning their limitations.
"Restaurant of Mistaken Orders" concept movie
Six Signals: Emerging futures, 6 links at a time.