The Average Consumer


The average consumer is no longer the aggregate of all consumers; it is literally the average, where the minimum, maximum, and median values are all the same. The pipeline has exhausted itself.

The intellectual experience is lived without inscribing itself as experience, merely satisfying a human body that has biologically evolved to be dopamine-dependent. The average consumer’s interaction with AI, educated users notwithstanding, will be no more sophisticated than watching French art movies on a streaming platform’s mobile app. Instead of starting an inquiry, the user merely submits a query. Even this act is already redundant, and it will be automated once the user grants app permissions for their Neuralink chip to integrate with an LLM of their choice. Just as entertainment has been reduced to algorithmically sanctioned tweets and shorts, intellectual activity has become no more than submitting some tokens to an LLM and receiving an algorithmically sanctioned answer whose structure can be traced back to model weights, biases, and training methods.

Human communication comes with an implicit subjectivity, such that imperfection and untruth are, at least usually, accepted by both parties. “Model hallucination”, however, is regarded as merely a growing pain until AGI “just fixes it”. It is only a matter of time until hallucination, a concept that already discounts any difference between mere error and a truth-deficient training alignment, is considered taboo. We’ll know the time has come once the influencers broadcast the activation signals handed over to them. There is no need to fear the age of bots, the so-called dead internet theory. The threat isn’t that we won’t be able to distinguish AI-generated content from “organic” content. It’s that we will discount human answers as AI-generated, something we are already prone to do on the strength of an em-dash, because human answers can only resemble what the algorithmic average allows them to resemble.

Having transcended our tools of analytical inquiry in the current post-scarcity period, we now possess the means to create synthetic products whose technical specifications coincide exactly with their idea: a machine calibrated to produce an output that will maximize its user’s satisfaction, regardless of its input. Despite our reflex to append “please and thank you” to the end of most sentences, LLMs are literally defined as that machine.
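To see in what sense this definition is literal, consider the standard preference-tuning (RLHF) objective; the notation below is conventional and assumed here, not drawn from anything this essay specifies:

$$
\max_{\theta}\ \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot\mid x)}\big[\, r_\phi(x, y) \,\big] \;-\; \beta\,\mathrm{KL}\big(\pi_\theta(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)
$$

Here $\pi_\theta$ is the model being tuned, $\pi_{\mathrm{ref}}$ its pretrained reference, and $r_\phi$ a reward model fitted to human preference ratings, that is, a learned proxy for the user’s satisfaction; the KL penalty only keeps the tuned model from straying too far from the reference, and nothing in the objective references truth, only preference.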

The surplus value of any activity is the derivative gains earned by merely consuming energy at the right place and the right time. One becomes a better athlete by training, and learns more about discipline and dedication in the process. One writes a shoddy article and obtains an article as promised, but some lessons learned as well. This surplus shrinks as dependency on AI increases, or rather, as the frequency of the exchange of information increases. We are prone to gloss over this fact, because doing so lets us speak with a model for about twenty minutes and proceed to decide that we are engaging in “vibe-physics”. Suddenly all our personal caprices and envies are vindicated by a soothing reply. The game has turned on itself. Whereas the challenge was once to one-shot a model so well as to achieve a maximal result, the model now races to pull its user into a false reality. It is no mere UX achievement that people en masse have obsessed over their “relationship” with 4o. Only a disaster of that scale could overshadow one of the lowest-energy model launches ever.