Unasked questions & unsung histories
In this week’s issue, we cover a lot of ground, touching on new uses for blockchain technology, new challenges for Clubhouse, and new exploits to obscure vital information on social platforms. Read on to the end to learn something shocking about icebergs.
—Alexis & Matt
OK, let’s talk about art on the blockchain
There has been a lot of discussion about crypto art in the past few months, largely because of the growing interest in NFTs. Like all discussions around blockchain, this one may require a bit of a primer before we dive in. So, what’s an NFT? It stands for “non-fungible token” — basically, a cryptographic token that is unique. While cryptocurrencies are fungible — any bitcoin is interchangeable with any other bitcoin — an NFT represents a particular item. For this reason, NFTs have become interesting in the art world, since each one can be a cryptographic representation of an image, a piece of text, or any other creative work.
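To make the fungible/non-fungible distinction concrete, here is a toy, in-memory sketch of a token registry. It is our own illustration, with made-up names and URIs, and not any real blockchain or contract standard:

```python
from dataclasses import dataclass, field

@dataclass
class ToyTokenRegistry:
    """A toy, in-memory stand-in for an NFT contract. No real blockchain involved."""
    owners: dict = field(default_factory=dict)   # token_id -> current owner
    artwork: dict = field(default_factory=dict)  # token_id -> URI of the creative work
    next_id: int = 0

    def mint(self, creator: str, artwork_uri: str) -> int:
        """Create a brand-new token that points at one specific work."""
        token_id, self.next_id = self.next_id, self.next_id + 1
        self.owners[token_id] = creator
        self.artwork[token_id] = artwork_uri
        return token_id

    def transfer(self, token_id: int, new_owner: str) -> None:
        """Hand over this particular token; it is not one of many identical coins."""
        self.owners[token_id] = new_owner

registry = ToyTokenRegistry()
token = registry.mint("alice", "ipfs://example-artwork")  # hypothetical owner and URI
registry.transfer(token, "bob")
# Unlike a coin balance, this token is not interchangeable with any other:
# it refers to exactly one piece of media.
```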
Tools and platforms have begun to emerge around NFTs, and in a recent newsletter issue, Robin Sloan recounts his experiments with Zora, a platform that bills itself as “a way for creators to publish creative media, earn money on their work, and have others build and share what they create.” Zora’s client library allows you to “mint” an NFT, which establishes your ownership of the artwork it refers to. Zora also has a web-based marketplace where you can buy and sell those works.
Sloan points out some interesting capabilities of this ecosystem. First, every object maintains a ledger that creates a perfect chain of provenance, something that’s never been possible before in the art world. Second, when an object is minted, you can create a built-in profit-sharing agreement, which means that as the object is bought and sold, the original creator can receive a percentage of every sale, creating an automatic royalty system that, again, hasn’t been feasible before. Sloan’s analysis is excellent and goes into deeper detail on these ideas, plus the potential need for centralized platforms, the human psychology around artificial scarcity, and the environmental impact that needs to be considered in any crypto discussion (the CO2 emitted when Sloan minted the NFT for one object was equivalent to burning a barrel of oil).
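To make that royalty mechanism concrete, here is a minimal sketch that replays a chain of resales and pays the original creator a fixed cut of each one. The 10% share, the names, and the prices are all assumptions for illustration; this is not Zora’s actual contract logic:

```python
CREATOR_SHARE = 0.10  # hypothetical 10% royalty fixed at minting time

def replay_sales(creator: str, sales: list[tuple[str, float]]):
    """Walk a chain of sales, recording provenance and the creator's royalty."""
    ledger = []              # provenance entries: (seller, buyer, price, royalty)
    owner = creator
    creator_earnings = 0.0
    for buyer, price in sales:
        # The first sale is the creator's own; later resales pay a royalty back.
        royalty = 0.0 if owner == creator else price * CREATOR_SHARE
        creator_earnings += price if owner == creator else royalty
        ledger.append((owner, buyer, price, royalty))
        owner = buyer
    return ledger, creator_earnings

ledger, earned = replay_sales("alice", [("bob", 100.0), ("carol", 500.0), ("dave", 2000.0)])
for seller, buyer, price, royalty in ledger:
    print(f"{seller} -> {buyer}: ${price:.0f} (royalty to creator: ${royalty:.0f})")
print(f"Creator has earned ${earned:.0f} across all sales.")
```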
→ A coat check ticket, a magic spell | Robin Sloan
From search engines to curiosity engines
We thoroughly enjoyed this essay by Megan Marz that takes a deep dive into how search engines have reframed the way we seek understanding. She argues that there is value in the elusiveness of knowledge, that not-knowing leads to the kind of curiosity that has historically resulted in great works of philosophy and art. Google, on the other hand, is “designed to reduce the world to the answers it can provide”, thereby “flattening the social into the statistical”. Marz posits that as we increasingly interact with the world through this framing device, it changes the way that we think, feel, and engage with ideas:
“The first page of my search results for ‘I am sad’ includes: ‘7 Things to Do When You Are Really Sad’; ‘6 Powerful Happiness Tips’; ‘5 Ways to Feel Happy.’ Google can’t solve the problem of being sad, but it can reconfigure the problem through a different logic, so that “sadness” seems less like an existential concern and more like a DIY home repair query. It can encourage you to forsake the ungoogleable for the googleable. Your sadness isn’t gone, but you have an answer — or at least you know there is an answer — which feels good.”
Can we imagine a different approach? What would a technology look like if it were designed to help you explore a question rather than find an answer? How might we build platforms that deepen curiosity rather than immediately quenching it? The Google model for engaging with knowledge has become a default, an implicit choice, but there are many other models we might build that could be more inspiring, engaging, and creative.
→ Easy Answers | Real Life
The forgotten history of Blackness on the internet
This Marketplace interview with Charlton McIlwain, an NYU professor of media and culture, is both fascinating and sadly unsurprising. McIlwain discusses decades of Black culture online, including early services like AfroNet and NetNoir, which have largely been overlooked in histories of the internet. The history of exploitation and appropriation on the internet is long and varied, from these early examples all the way up to Fortnite profiting from dances copied from people of color on TikTok. As McIlwain points out, much of the conversation around Black people and the internet tends to be framed in terms of “deficit” — lack of access, etc. — instead of surfacing the ways in which Black communities have been central to the internet’s evolution at every stage.
On a related note, if you haven’t read about Jerry Lawson, the inventor of the game cartridge and the “Father of Modern Gaming”, take a look at his Wikipedia entry and this IGN profile from 2019.
→ How the history of Blackness on the internet was erased | Marketplace
All the problems Clubhouse is about to have
We’ve been at this newsletter for a little while now, and we’ve seen some patterns emerge in how platforms often fail to protect themselves and their users from obvious threats. Perhaps no one at Clubhouse is a subscriber? We can see some problems coming from a mile away, and we hope they’re prioritizing solutions.
First, it’s got some serious privacy issues: Clubhouse encourages you to connect the app with your Contacts list so that it can notify you when friends join, or conversely, get your already-participating friends to welcome you into the club. While lots of platforms do this — LinkedIn, Signal, and many more — Clubhouse takes this two steps further. Accounts are based on phone numbers, and Clubhouse won’t let you invite someone unless you grant access to your contact list and the numbers it contains. Even if you decide not to grant access, Clubhouse will use the data it has about you from other users’ contact lists to recommend people you should follow, and who should follow you; in effect, your friends can violate your privacy and share parts of your social network with the app.
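To illustrate why those contact uploads are so revealing, here is a rough sketch of the inference they enable. It is our own simplification with made-up phone numbers, not a description of Clubhouse’s actual system:

```python
from collections import defaultdict

# What each user shared when they granted contact access:
# uploader's phone number -> the numbers in their address book (all made up).
uploaded = {
    "+15550001": {"+15550002", "+15550003"},
    "+15550002": {"+15550001", "+15550003", "+15550004"},
}

# Invert the uploads: for each phone number, who already has it in their contacts?
known_by = defaultdict(set)
for uploader, contacts in uploaded.items():
    for number in contacts:
        known_by[number].add(uploader)

def who_knows(number: str) -> set[str]:
    """People likely to know this number, even if its owner never joined the
    service or never shared their own contact list."""
    return known_by.get(number, set())

# "+15550004" has uploaded nothing, yet the service already knows at least one
# person who has them in their address book and can recommend the connection.
print(who_knows("+15550004"))  # {'+15550002'}
```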
Second, it’s pretty terrible on security: it transmits user IDs (not handles, but numeric identifiers) in clear text, making it trivial to know who is talking to or listening to whom. This “metadata” is what had Americans so upset about the NSA’s data collection practices in the early 2000s: even if you don’t have the content of a conversation, you have information that connects people who may not want that association made public. With Clubhouse, though, you can get the content of the conversation as well. Raw audio can be retrieved using a technique described by researchers at Stanford; a proof-of-concept site went up just this past Sunday, but we won’t link to it here for obvious reasons.
Finally, it faces a real uphill battle for moderation. Audio is difficult to moderate at scale using language processing alone. Even if a clip can be transcribed into plain text, those transcriptions aren’t perfect and can miss nuances in tone, and both false positives and false negatives could be damaging here. In lieu of automated content moderation, Clubhouse has instituted both Reddit-style “community moderation” and a “ban” feature that keeps a banned user from listening in to rooms where you are a host or contributor. That feature has broad, and perhaps unintended, side effects: Marc Andreessen has banned many reporters from his rooms, meaning that no matter who else may be participating in a conversation with him, that conversation is off-limits to many members of the media. This issue gained attention recently when Andreessen hosted a conversation with Elon Musk, and some reporters who cover Musk weren’t able to listen in.
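As a toy illustration of why transcript-based moderation cuts both ways, here is a deliberately naive keyword filter run over imperfect transcripts. It is our own sketch, not how Clubhouse or any real platform moderates:

```python
BLOCKLIST = {"attack"}  # a deliberately tiny, naive word list

def flag(transcript: str) -> bool:
    """Flag a clip if its (imperfect) transcript contains a blocked word."""
    words = (w.strip(".,!?") for w in transcript.lower().split())
    return any(w in BLOCKLIST for w in words)

# False positive: a benign phrase trips the filter.
print(flag("the heart attack scene in that movie was intense"))  # True

# False negative: a veiled threat slips through, because tone and context
# are invisible to a word list and easily lost in transcription.
print(flag("we should, you know... pay them a little visit"))    # False
```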
Building a platform like Clubhouse is difficult, and achieving such widespread adoption and buzz so quickly is impressive. Before it grows much larger, however, its founders and leaders need to contend with these foundational concerns.
→ Clubhouse, a tiny audio chat app, breaks through | The New York Times
‘Foreseeable misuse’ and the responsibility of makers
You may have seen videos circulating last week showing cops playing copyrighted music while citizens are filming them, ostensibly in an effort to avoid having that footage shared on social media. When those videos are shared on platforms like YouTube or Instagram, they trigger the platforms’ rules against posting copyrighted material and are removed, thereby making the footage incredibly difficult to share.
This is a fascinating example of emergent behavior, using the affordances (or, in this case, constraints) of a system in ways that weren’t originally intended. In this newsletter, we often share emergent behaviors that are delightful or expansive, whereas this one is clearly troubling. In the context of designing and making ethically, the key question is: Who should be responsible for emergent use cases, especially harmful ones? Should the designers of Instagram’s copyright infringement system have foreseen it being abused in this way, or is this a matter for regulation and punishment after the fact?
In a recent piece, Sheri Byrne-Haber talks about the legal “doctrine of foreseeable misuse” and how it might be applied to software development. In product liability law, manufacturers can be held liable for injuries caused by a product even when the consumer misused it, so long as that misuse was reasonably foreseeable. Applied to software design and development, this would require us to engage in the kinds of practices we talk about a lot here at EFL — ideating on potential misuse and abuse in the course of product development, in order to proactively protect against undesirable, unintended consequences.
→ Is This Beverly Hills Cop Playing Sublime’s ‘Santeria’ to Avoid Being Live-Streamed? | VICE
Charging your phone with your skin
For years now, Matt has been somewhat obsessed with technologies that recapture waste energy and turn it into a more useful form. Researchers have been hunting for ways to turn body heat into electricity for some time (we came across this Wake Forest University project way back in 2012), but now a new prototype from CU Boulder may be closer to reality than any prior project.
The unique innovation in this case is the flexibility of the material. Wearing a bracelet or ring could generate one volt per square centimeter of skin in contact; not enough to charge a laptop, but enough for a fitness tracker, smart watch, or other wearable. While watches have been powered by the motion of our wrists for decades, we may not be far away from our wearables never needing to charge, so long as we keep them on our bodies.
→ New wearable device turns the body into a battery | Tech Xplore
One accurate thing: how to draw an iceberg
On Thursday Feb 18th, glaciologist Megan Thompson-Munson tweeted a sketch of how icebergs truly float, and by Sunday, Josh Tauberer put up this simple tool: draw any shape and see its stable orientation as an iceberg. Here’s hoping for more stock images that accurately reflect physics!
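For the curious, the underlying physics can be sketched briefly: ice is roughly 10% less dense than seawater, so about 90% of a berg sits below the waterline, and the stable orientation is the one that minimizes the height of the center of mass relative to the center of buoyancy. A rough point-cloud version of that search (our own approximation, not Tauberer’s implementation) might look like this:

```python
import numpy as np

RHO_RATIO = 0.90  # assumed ice/seawater density ratio: ~90% of the shape is submerged

def stable_angle(points: np.ndarray, angles=np.arange(0, 360, 2)) -> float:
    """points: (N, 2) equal-mass samples filling the drawn shape.
    Returns the rotation (in degrees) with the lowest potential-energy proxy."""
    best_angle, best_energy = 0.0, np.inf
    for deg in angles:
        theta = np.radians(deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        ys = (points @ rot.T)[:, 1]                 # heights after rotating the shape
        waterline = np.quantile(ys, RHO_RATIO)      # 90% of the mass lies below this line
        center_of_mass = ys.mean() - waterline
        center_of_buoyancy = ys[ys <= waterline].mean() - waterline
        energy = center_of_mass - center_of_buoyancy  # lower means more stable
        if energy < best_energy:
            best_angle, best_energy = deg, energy
    return best_angle

# A tall, thin "classic cartoon" iceberg: samples filling a 1 x 4 rectangle.
rng = np.random.default_rng(0)
tall_berg = rng.uniform([0.0, 0.0], [1.0, 4.0], size=(20000, 2))
print(stable_angle(tall_berg))  # ~90 or ~270: the shape prefers to lie on its side
```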
→ Iceberger | Josh Tauberer