Hearing what we see & seeing what we think

Agency is a critical component of successful innovations: does this give its user or recipient more control over their life? Several of the signals we saw this week deal with agency: who has it, how they get it, and what they can do with it. We also heard other positive signals about the development of AI, computers that can run until they fall apart, and systems that can read your mind.

—Alexis & Matt

1: Permissionless identities

Who are you and what do you do? These two questions are tightly coupled for many of us, in that our identity is largely shaped by our work. Historically, that “work identity” has been determined by your employer and your job title. But this essay by Tom Critchlow explores how the way we define identity has changed from a permission-seeking system to a permissionless system: “People used to think they needed permission - so they would ask for somebody else to give them permission to advance, to be something different - a new job title, a new degree, a new certification, a new membership.”

But the internet provides methods that allow us all to iterate and explore new identities in a more fluid and self-directed way. Specifically, Critchlow calls out networked writing — starting a blog or a newsletter, for example — as a way of being able to “act your way into” new versions of yourself, by exploring ideas in public and by connecting to others with similar interests. (Which also reminds us of another recent essay entitled “Writing is networking for introverts”.)

This kind of self-directed identity creation has become especially critical at a time when so much is fluid, unstable, and insecure. It’s unlikely that most of us can reliably depend on a company to provide long-term career definition and growth in the same way that previous generations could, so we need to find ways of framing our work and ourselves that are more self-reliant. As Alexis’s mentor John Maeda once told her, “To institutions, you are always expendable”. Critchlow talks about creating those identities for ourselves via narrative institutions: “projects, websites, businesses, side projects, hobbies or activities that you can lean on for stability. While formal things like career, job description or professional label are in flux we can rely on our narrative institution to provide stability.” (In fact, Ethical Futures Lab is a perfect example of one such narrative institution!)

LF10 - Permissionless Identities

How the networked age is reinventing our careers


2: Expanding the definition of 'active speaker'

Researchers at Google have developed a video processing algorithm that can notice when a person begins speaking in sign language, and when they have finished. Given how much time many of us are spending on video calls for our jobs lately, this innovation helps people who sign have equal presence in conversations with speaking participants.

The implementation described in their paper is quite clever, and attempts to create a system that will work with any video conferencing software without new code. The system monitors a user’s body movements, and when its model determines the person has begun signing, it emits a sound outside the range of human hearing but audible to computers. This sound triggers the “active speaker” algorithms within Zoom, Meet, Teams, or whatever, moving the focus of the meeting to the signer. This approach is much more likely to find rapid adoption because it works with existing systems’ logic, rather than requiring every videoconferencing application to update its software.
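As a rough sketch of that mechanism — with a simple motion-energy threshold standing in for the paper’s learned signing-detection model, and with all function names, thresholds, and the 20 kHz tone choice being our own assumptions, not Google’s actual implementation:

```python
import numpy as np

SAMPLE_RATE = 44100
TONE_HZ = 20000  # above typical human hearing, but picked up by microphones

def motion_energy(keypoints):
    """Mean frame-to-frame displacement of tracked upper-body keypoints.

    keypoints: array of shape (frames, joints, 2) holding x/y positions.
    """
    diffs = np.diff(keypoints, axis=0)
    return float(np.linalg.norm(diffs, axis=-1).mean())

def is_signing(keypoints, threshold=0.05):
    """Crude stand-in for the learned model: treat sustained upper-body
    motion as signing activity."""
    return motion_energy(keypoints) > threshold

def ultrasonic_cue(duration_s=0.2):
    """A short 20 kHz sine tone; playing this through the speaker is what
    trips the meeting software's built-in active-speaker detection."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * TONE_HZ * t)
```

In a real pipeline the tone samples would be written straight to the audio output device while signing continues; here we only generate them, since the point is that no videoconferencing app needs to change for the trick to work.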

We find this particular innovation to be well-thought out, in that it empowers the signer to interrupt, just as any speaking participant can. This is a far better solution than “hand raising” or other permission-based approaches that rely on a speaker to take notice and grant focus.

Google research lets sign language switch ‘active speaker’ in video calls

An aspect of video calls that many of us take for granted is the way they can switch between feeds to highlight whoever’s speaking.


3: Using humans to benchmark AI

Facebook’s AI research blog recently posted an overview of Dynabench, their new framework for testing AI software. This kind of benchmarking has historically been done by machines — using data sets like GLUE or ImageNet to test how well an AI model can do things like understand language or identify images. But systems that pass these benchmarks still show significant flaws and gaps when evaluated in real-life scenarios by humans. In short, it’s really hard to have a thorough computational test for AI.

Dynabench puts humans in the loop, measuring how easily AI systems can be fooled, which is a better gauge of a model’s sophistication. “Human annotators try to find examples that fool even state-of-the-art models into making an incorrect prediction. For example, the annotator could deliberately write a positive restaurant review — “The tacos are to die for! It stinks I won’t be able to go back there anytime soon!” — so that the model might misunderstand and then miscategorize this as a negative review.”
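The core of that loop can be sketched in a few lines. Here a deliberately naive keyword classifier stands in for a state-of-the-art model (Dynabench’s actual models and APIs are far more sophisticated; the function names and word lists below are ours):

```python
NEGATIVE_WORDS = {"stinks", "awful", "terrible", "bad"}

def toy_sentiment_model(text):
    """Stand-in classifier: flags a review as negative if any negative
    keyword appears, so idioms like "to die for" + "it stinks" fool it."""
    lowered = text.lower()
    return "negative" if any(w in lowered for w in NEGATIVE_WORDS) else "positive"

def collect_fooling_examples(model, annotated):
    """The human-in-the-loop step: keep only the human-written examples
    the model gets wrong; these seed the next, harder benchmark round."""
    return [(text, label) for text, label in annotated if model(text) != label]
```

The examples that survive this filter are exactly the ones a purely computational benchmark would miss, which is what makes the resulting test set "dynamic" rather than static.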

Introducing Dynabench: Rethinking the way we benchmark AI

We’re sharing Dynabench, a first-of-its-kind platform for dynamic AI benchmarking.


4: Ethics of genetic augmentation

Roughly 1 in 1,000 children are born with deafness caused by their genes; in many cases it is inherited from parents through a recessive pairing, but in some it can be a random mutation. Researchers and physicians at several different institutions are investigating gene therapies that could significantly improve these patients’ ability to hear.

An ethical issue arises: these therapies are best applied when the patient is extremely young, before they begin to develop language, and when they are far too young to understand or consent to the treatment. This is often true of cochlear implants and other technical interventions — cochlear implants are now available to children as young as nine months — but the promise of a genetic therapy has some in the deaf community concerned about the loss of culture and language that could come from a stigmatization and rejection of the condition.

This debate also extends beyond therapy toward augmentation. If a gene therapy can be found that would reverse deafness, could one be found to speed a metabolism? Enhance low-light vision? Select for eye color? Increase the probability of being taller than the norm? Since gene therapies are most effective on the young, particularly when cognitive capabilities are in play, what are the ethics of making these decisions for people who are too young to consent? What do we lose, or gain, as a culture if we alter our abilities in this way?

Gene Therapy Could End Deafness. Should It?

When Jessica Chaikof was born in February 1995, doctors at an Atlanta hospital placed a pair of headphones on her…


5: A future without batteries

Two different lines of research are converging to create new capabilities long imagined in computing. First, energy-harvesting technology makes it possible to extract electric charge from a variety of everyday sources, including the light in your room, the heat in your hand and the motion generated when you type. Many of these technologies have existed for some time, but they are now met with a new partner: ferroelectric RAM. This type of memory chip, still quite experimental, can store information without power while retaining RAM’s extremely fast read and write times. Paired with ultra-efficient processors, these technologies combine to create “perpetual computing” machines.

These are not the mythic “perpetual motion” machines that generate their own power, but instead, computing devices that can lose power, store their state, and when power returns, pick right up where they left off without rebooting or starting over. Researchers built a Game Boy device with solar panels and power-capturing buttons to demonstrate these capabilities; if the machine loses power, you can simply mash buttons to generate enough energy to resume your Tetris game right where you left off.
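The checkpoint-and-resume behavior at the heart of this is easy to illustrate. Below, a plain dictionary simulates nonvolatile ferroelectric memory, and a resumable loop survives a mid-computation “power loss” — a toy model of our own devising, not the researchers’ actual firmware:

```python
class FRAM:
    """Simulated ferroelectric RAM: contents survive a power loss."""
    def __init__(self):
        self._cells = {}

    def save(self, key, value):
        self._cells[key] = value

    def load(self, key, default=None):
        return self._cells.get(key, default)

class PowerLoss(Exception):
    """Raised to simulate the device browning out mid-computation."""

def count_to(n, fram, die_at=None):
    """Resumable loop: checkpoints progress to FRAM every iteration and,
    after an outage, picks up from the last checkpoint instead of restarting."""
    i = fram.load("i", 0)
    total = fram.load("total", 0)
    while i < n:
        if die_at is not None and i == die_at:
            raise PowerLoss  # power fails here; state is already in FRAM
        total += i
        i += 1
        fram.save("i", i)
        fram.save("total", total)
    return total
```

The design choice that matters is checkpointing *before* power is guaranteed to last: because every iteration’s state lands in nonvolatile memory, an outage at any point costs at most one iteration of rework, which is what lets the Game Boy resume mid-Tetris.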

These innovations, when taken together, open up a whole new area of low-power computing devices that require no direct power source and have no batteries, often the heaviest and most environmentally damaging component. The first likely use for this tech will be in sensing: sensors powered by movement or other environmental forces could be embedded in places computers can’t currently go, like into the freshly poured concrete of a bridge, where they could track and relay strain data for the lifetime of the structure.

Battery-Free, Energy-Harvesting Perpetual Machines

A new breed of computers could run forever—or at least until long after we’re gone.


6: A picture is worth 1,000 neurons

Most brain-computer interface (BCI) research thus far has focused on the ability to execute limited actions based on brain signals, such as moving a cursor on a screen or typing a letter of the alphabet. But researchers at the University of Helsinki have developed an AI system that can generate images of what a person is thinking based on their brain signals.

The system uses a generative adversarial network that was trained with neural activity from participants who were asked to focus on different images in a training set. The GAN is then paired with “neuroadaptive generative modeling”, which can adapt a generative model to neural activity to understand a person’s intention. We’re not sure exactly what future possibilities this uncovers, but could imagine an expansion of the visual language (memes, gifs, emoji) we use today coupled with more immediacy and virtuosity.

New Brain-Computer Interface Transforms Thoughts to Images

The University of Helsinki uses AI machine learning and GANs for a brain-computer interface that can imagine what you’re thinking to create new images.


Is this sushi too hamachi-matchy?

Hold on to your hats, there’s a lot going on in this one. Sushi Singularity (that name!) is a Japanese restaurant that requires you to submit a biological sample when you make a reservation. Your genetic data is then used to custom design a bespoke sushi meal with a tailored set of nutrients. That meal is not only tailored based on your genetic makeup, but is also custom constructed using a combination of 3D printers and CNC machines. Kanpai!

Sushi Singularity makes a bespoke dinner based on your bodily fluids


Japanese studio Open Meals has announced a restaurant concept called Sushi Singularity that uses a customer’s faeces to create bespoke 3D-printed sushi.


By Ethical Futures Lab

Six Signals: Emerging futures, 6 links at a time.



Powered by Revue