Algorithmic Control

It is way too hard, perhaps outright impossible, to maintain a high-quality, inspiring discovery feed on social media. Every day I walk a knife’s edge, carefully calibrating my feed to keep it from degrading into a time-sucking black hole.

The algorithmic feed is a delicate stream of static and moving images, susceptible to every user action. I imagine: when I click into a photo’s detail view, linger a few seconds longer on a video, or bookmark a post for later, all my footprints are fed into the mysterious algorithm, which then mercifully spits out more content of the same kind and reshuffles my feed in an instant.

However, these are momentary decisions that reflect only my fleeting desires, which may not align with my long-term goals. As a result, algorithms always prioritize what harvests my attention in the moment over what I’d like to see in general. Or, put far better by Annika: “Social media privilege our impulses over our intentions.”

What worries me more is that not only do I have little knowledge of which actions weigh into the system, I also have no control over them. Ezra Klein shared the same sentiment:

So what I’d like to see on Threads, and really any network, is tools that give me more control. I want to tell the algorithm what I want and then be able to tweak those preferences, not have it learn what it thinks I want from what I do.

I want to tune my algorithm, with an overarching way to etch my preferences onto it. I want some sort of guardrail for my feed, so that I don’t have to laboriously hand-train it by giving feedback on individual pieces of content. I want to tell the algorithm to stop showing me beautiful ladies, innocent pets, or plain stupid memes. Instead, show me latte art! Interior design! Graphics! Exquisite objects! New typefaces! Motion! Furniture!

LLMs could be up to this task, providing a simple interface to the otherwise black-box algorithm. Users could “tell” the algorithm in natural language what they want: show more or less of one type of content, or never show something again. Even better, an LLM might help construct one-off graphical user interfaces to meet needs like Ezra’s:

For instance, I’d like a virality slider where I can set my preference to seeing less viral posts, and register a desire not to see the most viral posts. Or I’d like to be able to choose to see more posts with links and fewer without. Or…
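To make this concrete, here is a minimal sketch of what such explicit controls might compile down to. Everything here is hypothetical: the `FeedPreferences` structure stands in for what an LLM might emit from a natural-language request, and `rerank` applies it on top of whatever opaque base score the platform already computes. The virality cap and link preference mirror Ezra’s examples.

```python
from dataclasses import dataclass, field

@dataclass
class FeedPreferences:
    """Hypothetical explicit preferences, e.g. produced by an LLM
    from a request like 'no memes, more typography, nothing too viral'."""
    blocked_topics: set = field(default_factory=set)   # hard filter: never show
    boosted_topics: set = field(default_factory=set)   # soft boost
    virality_cap: float = 1.0   # the slider: 0.0 hides all viral posts
    prefer_links: bool = False  # favor posts that carry a link

def rerank(posts, prefs):
    """Re-score a candidate feed against the user's explicit preferences.

    Each post is a dict with 'topics', 'virality' (0..1), 'base_score',
    and optionally 'has_link' — a stand-in for the platform's own data.
    """
    kept = []
    for post in posts:
        topics = set(post["topics"])
        if topics & prefs.blocked_topics:
            continue  # "never show something again"
        if post["virality"] > prefs.virality_cap:
            continue  # drop the most viral posts past the slider
        score = post["base_score"]
        if topics & prefs.boosted_topics:
            score *= 2.0  # "show me more of this"
        if prefs.prefer_links and post.get("has_link"):
            score *= 1.5  # "more posts with links"
        kept.append((score, post))
    kept.sort(key=lambda pair: -pair[0])
    return [post for _, post in kept]
```

The point of the sketch is the division of labor: the preferences are legible and user-editable, while the opaque part of the ranking is confined to `base_score`, so the guardrails sit on top of the black box rather than inside it.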

I know I am jumping to solutions too early. Ultimately, the difficult problem here is not technical but one of business incentives. As long as companies are monetizing epistemic inequality and optimizing for screen time, I can’t see why they would give us more transparency and control. For the time being, I will stay away from discovery feeds.