The future of media & mediating the future

Over the last couple of weeks it’s become unavoidably clear that we have lived, and are living, through significant change. Nearly all our assumptions about “how stuff works” are shifting, and while that’s frightening, it can also be thrilling. This week we investigate signals that show us examples of that change, how to think rationally about this and other crises, and where new opportunities suddenly reveal themselves. Plus, some truly magical AI-generated real estate listings!

—Alexis & Matt

1: The future of audio

We talk a lot about how good design happens inside well-defined constraints, and as this long and well-researched piece from Matthew Ball shows, those constraints are often in flux. When they shift, new opportunities emerge that can be lucrative and damaging all at once.

The piece is far too rich to be summarized in a few short paragraphs, but some key insights stood out to us:

  • “All analysis of the past and future of a given media category must start from the fact that media is technology. This is because technology not only enables content categories, it defines their business models and shapes the content, too.”

  • The 45rpm single dictated song lengths for years. When CDs and digital distribution loosened that constraint, songs got longer, but the pay-per-play model of iTunes and Spotify has led to songs shrinking again. After all, why get $1 for a five-minute song when you can get $2 for two two-and-a-half-minute songs?

  • We’re seeing a trend toward one-word song titles because “labels are encouraging artists to simplify the name of their songs and albums in order to ensure they’re optimized for voice-controlled speakers and touchscreen-based searches.”

  • Vertically integrated platforms can innovate where open standards can’t. Spotify recently announced that users of its Anchor podcasting platform can use any Spotify-licensed song, at any length, in their podcasts. Suddenly a podcast-as-mixtape, a weekly radio show, or even just more familiar intro and outro music is possible, but only if you also use Spotify’s distribution.

  • Reality itself is a constraint, and one that is weakening given the increasingly virtual interactions brought on by Covid quarantines and social distancing. The logistical morass that was the live concert business is suddenly wiped clean, freeing artists who give virtual concerts to be as creative and imaginative as software will allow.

As George Packer pointed out, society finds itself in a “plastic hour,” and many industries and technologies are poised for radical re-imagination given these new constraints. This detailed analysis traces the impact on just one of these businesses, but invites us to think this clearly about others.

Audio’s Opportunity and Who Will Capture It

The content, business models, and health of every media category is driven by technology. And audio technology has never been so diverse and dynamic.

www.matthewball.vc

2: Actually, make me think

We often write here about the way that design decisions shape the constraints and possibilities of how we interact with the world. A recent post by Ralph Ammer riffs on this idea, specifically critiquing the focus on simplicity in UX design. As we interact with increasingly complex systems, a key tenet of design has been to shield users from that complexity with increasingly simple interfaces. While there’s value to this approach in terms of foundational usability, it may not be a one-size-fits-all solution. Ammer argues that oversimplification makes us more vulnerable when systems fail, blind to the consequences of our interactions, and less empowered overall:

“Maybe being able to speak a foreign language is more fun than using a translation software. Whenever we are about to substitute a laborious activity such as learning a language, cooking a meal, or tending to plants with a — deceptively — simple solution, we might always ask ourselves: Should the technology grow — or the person using it?”

While the tension between simplicity and complexity may seem binary, we believe there’s compelling space for designers to explore the interplay between the two. We can look to many historical examples, like the piano, of interfaces that are simple to learn but difficult to master. Alexis has written about the concept of playable systems, which is rooted in the idea that a great experience is one that allows for virtuosity. We can make systems that allow users to engage with complexity without making them feel inherently complicated or unapproachable.

Make me think!

Until recently everyday objects were shaped by their technology. The design of a telephone was basically a hull around a machine. The task of the designers was to make technology look pretty.

ralphammer.com

3: The first 5 minutes of the future

In this series from The Institute for the Future, Jane McGonigal introduces a future forecasting game that asks players to think about future scenarios through a very specific lens. The game presents five “unthinkable” future events, from the shutdown of all global communications to extreme weather crises. It then poses a set of questions prompting you to imagine the first five minutes of these scenarios in deep detail — your first reaction, how you feel, what you do. It’s based on a clinical intervention called “specificity induction,” which is meant to help people imagine and plan for the future more effectively. Brain scans of people who have gone through this kind of training show increased brain activity around future planning, empathy, and creativity. McGonigal created the game because she believes these skills are urgently needed now and in the near future:

“From the pandemic to climate change, social protest to new technologies of disinformation, mass migration to automation of work… the increasing scale and scope of previously “unthinkable” events means one thing. We need to practice thinking the unthinkable and imagining the unimaginable every single day.”

The first five minutes of the future

The First Five Minutes of the Future is a new future forecasting game developed by Institute for the Future’s Director of Game Research and Development, Jane McGonigal.

medium.com

4: Database of awful AI

Giving visibility to problematic or dangerous systems isn’t sufficient to discourage or dismantle them, but it’s certainly a start. This crowdsourced and curated list of algorithms that make decisions based on bias, or make decisions that could significantly affect someone’s life, is a difficult read but an important one for inspiring criticism and action.

Some of the most egregious examples include (text is from the awful-ai GitHub repository):

  • Depixelizer - An algorithm that transforms a low-resolution image into a depixelized one; always transforms Obama into a white person due to bias.

  • PredPol - PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. [summary]

  • Misleading Show Robots - Show robots such as Sophia are being used as a platform to falsely represent the current state of AI and to actively deceive the public into believing that current AI has human-like intelligence or is very close to it.

The repository also features two areas—Contestational research and Contestational tech projects—documenting efforts to challenge these AI systems and force different behavior. Using our “three A’s” framework of responses to problematic technology, the tech projects illustrate a range of adversarial techniques, empowering people to subvert the intent of an algorithm, make themselves invisible to a system, or introduce noise into a system to make it fail.
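To make the “introduce noise” tactic concrete, here is a minimal, purely illustrative sketch. A linear classifier stands in for a deployed model, and an FGSM-style perturbation (stepping each input value a small amount against the gradient of the model’s score) flips its decision. Nothing here comes from the awful-ai repository; the model, weights, and numbers are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deployed model: a linear classifier
# over flattened input features (weights w, bias b).
w = rng.normal(size=64)
b = 0.0

def classify(x):
    """Return the model's label: 1 if the score is positive, else 0."""
    return int(x @ w + b > 0)

# A "clean" input that the model confidently labels as class 1.
x = w * 0.1 + rng.normal(scale=0.01, size=64)

# FGSM-style adversarial noise: nudge every feature by epsilon in the
# direction that lowers the class-1 score. For a linear model, the
# gradient of the score with respect to x is simply w.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(classify(x), classify(x_adv))  # the small perturbation flips the label
```

The point of the sketch is how little noise is needed: each feature moves by only 0.2, yet the decision inverts, which is the same asymmetry the adversarial projects in the repository exploit against far larger systems.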

GitHub - daviddao/awful-ai

😈Awful AI is a curated list to track current scary usages of AI - hoping to raise awareness - daviddao/awful-ai

github.com

5: The what and why of "deepfake acting"

While the techniques used to make deepfakes (AI-generated fake videos of real people) are getting more and more sophisticated, most of them still require human actors to participate. This piece from the MIT Technology Review covers some of the more fascinating examples of what it’s like to work as an actor in a deepfake project.

For projects that mimicked Richard Nixon or Vladimir Putin, it helped to find actors with similar facial features and mannerisms, so that their movements would translate more easily onto the target’s face. The resulting clips of a fake Nixon reading “In event of moon disaster” are remarkable and disturbing, the result of an actor who spent hours getting the subtleties of Nixon’s cadence just right.

Some projects are hindered by a lack of source material, as in a deepfake of Kim Jong Un. In this case an actor who looked less like Kim but was more expressive added personality and detail to what otherwise would have been a flat recreation.

But perhaps most surprising to us in this review was the use of actors as cover for victims of political violence or persecution. For the HBO documentary Welcome to Chechnya, the filmmakers cast actors with facial structures and ethnicities similar to those of the LGBTQ dissidents who were interviewed. Rather than appearing on-screen as themselves, risking violent backlash and persecution, these activists were “played” by actors whose faces were digitally superimposed over the activists’ own, allowing the words and emotion to come through while keeping the activists anonymous. Deepfakes (as we’ve reported several times) could be a growing danger in our climate of misinformation, but as this use reminds us, technology is rarely evil or good outside of a specific application.

Inside the strange new world of being a deepfake actor

There’s an art to being a performer whose face will never be seen.

www.technologyreview.com

6: Voiceprints for illness

When there’s so much uncertainty around diagnosing and treating COVID-19, it was perhaps inevitable that home technology would step into the void. Much like dogs that seem able to smell COVID-19 infections, researchers are hoping to use people’s voiceprints to diagnose coronavirus, as well as dementia, depression, and many other ailments.

Much like the disease-sniffing dog, even if these systems can accurately identify illnesses, they will be unable to explain what it is they’re hearing and how it correlates with infection. Human speech is complex and deeply detailed, and detection systems will require training data from thousands of infected and uninfected people for each targeted ailment. We would therefore hope that, even if medicine does embrace Alexa’s initial diagnosis, clinical examination would follow before any interventions were prescribed. Given that Alexa is the product of the world’s largest store for everything, it would be all too easy to link these diagnoses to suggestions for a person’s shopping.
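To give a feel for why so much labelled data matters, and why such a model can’t explain itself, here is a toy, entirely synthetic sketch. Random vectors stand in for acoustic features (pitch, jitter, spectral statistics, etc.), and a nearest-centroid rule stands in for the classifier; none of this reflects the researchers’ actual methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for acoustic feature vectors extracted from
# voice recordings. Real systems would need labelled recordings from
# thousands of infected and uninfected people.
n_per_class, n_features = 500, 8
healthy = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features))
ill = rng.normal(loc=0.8, scale=1.0, size=(n_per_class, n_features))

# Nearest-centroid classifier: label a new voice sample by whichever
# class mean its feature vector is closer to.
centroids = np.vstack([healthy.mean(axis=0), ill.mean(axis=0)])

def predict(sample):
    """Return 0 (healthy) or 1 (ill) by nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))

# The model outputs only a label derived from a distance in feature
# space -- it offers no account of *what* in the voice drove it.
held_out_ill = rng.normal(loc=0.8, scale=1.0, size=(200, n_features))
accuracy = np.mean([predict(s) for s in held_out_ill])
print(f"accuracy on held-out 'ill' samples: {accuracy:.2f}")
```

Even in this idealized setup the classifier is imperfect and opaque, which is exactly why clinical examination should follow any algorithmic first pass.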

Alexa, do I have COVID-19?

Researchers are exploring ways to use people’s voices to diagnose coronavirus infections, dementia, depression and much more.

www.nature.com

One bizarre real estate market

Danielle Baskin recently tweeted this wonderfully bonkers project in which she trained a GAN to generate apartment listings, and the results are hilarious. These apartments come with an array of unique features, including beds with built-in stairs, a walk-in dishwasher, and “deep wooden floors.” The full thread includes more details, as well as an actual inquiry from a respondent on Craigslist.

Danielle Baskin

@djbaskin

In my dream I met a landlord who used GANs (Generative Adversarial Networks) to create Craigslist apartment listings, and I was inspired to try it too! This is my first GAN rental unit and I wonder who will reply. https://t.co/ux6wtD0RAs

4:57 PM - 5 Oct 2020

By Ethical Futures Lab

Six Signals: Emerging futures, 6 links at a time.
