# A more successful penny-farthing: *the social construction of 'generative AI'*
*There's a video version of this [essay](https://www.youtube.com/watch?v=4TsOD7x6rSQ), with visual aids.*
> [!BLUF]
> ***'AI' chatbots are to LLM systems what the penny-farthing is to bicycles.***
*Hear me out.*
You've heard the story in many a form before: some technologies, just by the fact of their invention in the first place, will simply *change the* (social) *world*. The stirrup, arriving in Europe by the 8th century, allows someone to wield a sword on horseback, and [medieval feudalism ensues](https://en.wikipedia.org/wiki/Great_Stirrup_Controversy). The [composite bow](https://en.wikipedia.org/wiki/Composite_bow) gets the same draw force as a six-foot solid yew English longbow, but in half the size, by layering materials elastic in compression (horn on the belly of the bow) with others elastic in extension (sinew on the back of the bow); you can now shoot a bow on horseback; one thing leads to another, and Genghis Khan passes his Y chromosome to 8% of males from the Korean peninsula to the Black Sea ([disputed](https://en.wikipedia.org/wiki/Genetic_descent_from_Genghis_Khan)). Closer to us, the bicycle lets Victorian women access mobility, since horse-riding would jostle their fragile insides and risk *sinful friction*. Let that brew for a couple of decades, BLAMMO, you get suffragettes, then [the whole of modern Feminism](https://en.wikipedia.org/wiki/Bicycling_and_feminism).
# Technological Determinism
This line of thinking, this way of looking at key technologies as causing tremendous social change downstream, is called [*technological determinism*](https://en.wikipedia.org/wiki/Technological_determinism). It makes for neat stories, like the ones outlined above; stories that present the status quo as inevitable once the technological artefact exists. History rarely has control groups, so this causal relationship is only a hypothesis, as all historical relationships are. Yet this hypothesis is seductive, because it works backwards from the present - it does not consider counterfactuals: what alternative social movements the same technology could have enabled (say, an equilibrium of power between warring kingdoms in Asia), or what alternative technological artefacts (say, the penny-farthing) could have been adopted instead - if you thought teens on Lime bikes in London were bad, at least they're not riding five-foot-tall machines.
# The Social Construction of Technology
In the 1980s, an alternative way of looking at the reciprocal interaction of social and technological developments arose: the [Social Construction Of Technology](https://sciencepolicy.colorado.edu/students/envs_5110/bijker2.pdf) (SCOT) (which I researched for [[Social Theory and its use in three studies on 'generative AI' in higher education.|this piece]]). It, too, is a theoretical perspective, a bundle of unfalsifiable hypotheses that cannot be shown to be correct - again, that's History for you, it doesn't do controlled trials. Just as there are insights to be gained from looking at history through the lens of technological determinism, SCOT offers another lens for enquiry. In this area, it's less about an exclusive claim to truth than about practical utility - being useful in making sense of things.
SCOT is to technology what EPOR, the [Empirical Programme Of Relativism](https://journals.sagepub.com/doi/10.1177/030631278101100101), had been to science when it emerged in the previous decade. Coming, itself, a decade after Thomas Kuhn's *Structure of Scientific Revolutions*, EPOR was an initiative in the sociology of science seeking to empirically demonstrate that even the hardest of natural sciences (e.g. astrophysics) had an element of *epistemic relativism* - that scientific truth was, ultimately, *socially constructed*. Those ideas were the late-twentieth-century manifestations, as regards science, then technology, of epistemologies that arose at the turn of the century in broader sociology: moving past narrow positivism, toward social constructivism. Social truths are not absolute, platonic truths merely *applying* to the social world; they are instead negotiated consensuses *emerging* from social groups: in other words, socio-cultural constructs.
# SCOTting the bicycle
The bicycle, with all its wonderfully weird early iterations, was the first history used to illustrate the role social groups play in the development of a technological artefact. [Wiebe Bijker](https://en.wikipedia.org/wiki/Wiebe_Bijker) (b. 1951), a Dutch sociologist of science and technology (trained as an engineer), introduced SCOT, making the case by retracing the history of the bicycle. Yes, once there is *closure* onto a particular implementation of the artefact, social change ensues; but, prior to this, a new technology is in a state of *interpretive flexibility*, in which the problems it solves, and the type of solution it offers, are up for grabs, and have to be negotiated through the interplay of social forces. Are you starting to see the 'AI' angle yet?
Bijker would go on to illustrate SCOT with [other examples](https://mitpress.mit.edu/9780262522274/of-bicycles-bakelites-and-bulbs): the lightbulb, or Bakelite - but here I'm committed to a bicycle bit. Before drive-chain transmission, pedals were on the front wheel. This was a start on the broader mobility problem the bike was trying to solve, but it brought another problem: speed, or the lack thereof. Without gears, the only way to roll a longer distance for the same turn of the pedals was to make the wheel larger. And larger. And larger. Hence the penny-farthing.
# Penny-farthing 2: Penny Farther
It looks absolutely ludicrous, in hindsight, that such a design could even have been thought to be a viable implementation of the bicycle. Yes, there is higher speed, but at the price of safety - not to mention an unwieldy form factor. The sight of a moustachioed London hipster, transporting their penny-farthing on the Tube, is a reminder that whilst technology design always involves trade-offs between form and function, it takes a special type of man - because it's almost always men - to deliberately choose, at considerable expense, a design that *fails at both*.
And yet, when it came out in the 1870s, the [penny-farthing](https://en.wikipedia.org/wiki/Penny-farthing) was incredibly popular. The best incarnation of the bicycle, promising speeds never achieved before - so much so one would need a chin-strap for their top-hat. A symbol of freedom, it birthed cycling as a sport. The additional danger brought a *frisson* to young dandies, and the penny-farthing was an acceptable solution to this particular *relevant social group*. But not so to, say, older people, or women. Designs using drive-chains, *nearly twenty years later*, would solve the speed problem with a better safety profile - the design was even called the [*safety bicycle*](https://en.wikipedia.org/wiki/Safety_bicycle). But even lower to the ground, speed didn't solve the problem of vibrations from the road surface: a problem of comfort more than safety.
This problem was to be solved by [air-filled tyres](https://en.wikipedia.org/wiki/Bicycle_tire#History), also in the late 1880s, but they weren't much loved, on aesthetic grounds: the solution they offered to vibrations wasn't deemed worth the fugly look, particularly by the dandy crowd. But there's a limit to the speed that can be reached, in practice, on a wobbly bike. When the social groups who cared foremost about speed noticed, in sports cycling, that air tyres increased speed, there was a social *redefinition of the problem* solved by those tyres, from one of comfort to one of speed.
# Closure (not the kind where you text your ex)
This is what SCOT calls *closure*: an end, or certainly a reduction, to *interpretive flexibility* - a move towards a common interpretation of the technology as a particular artefact, or set thereof. Different *user groups* will have different *technological frames*: a set of values, ideas and behaviours through which each makes sense of the artefact. From its inception, for some, the bike is for transportation, for others it is sporting equipment. Some care about comfort, others about speed, etc. Closure is the collapse of this diversity of frames into, if not a single one, a smaller set of inter-compatible frames shared by larger sets of people, eventually entire societies.
Of course, like all things, this closure is provisional - *derailleurs*, electric assistance, tandems, recumbent bikes: each innovation around the technology re-opens interpretive flexibility, and calls for another round of social construction - of collective negotiation between different groups: users, manufacturers, cultural commentators, regulators (if applicable), etc. *Technology doesn't just happen.* When a technological artefact comes into your hands, that's the result of a series of decisions made by real people, based on actual motives, collective incentives, personal hopes and fears. *The bit of tech that changes society has been shaped by society in the first place.*
[Which is really, really important.](https://youtu.be/6ROlMFlbkWE?t=123)
-"Cool story, Berard, but what does it have to do with ChatGPT?"
*Hear me out further.*
# Interpretive flexibility of LLM technology
The software techniques that underpin AI chatbots have been around for more than a decade - they themselves use deep learning with artificial neural networks modelled as linear algebra, which are [even older techniques](https://en.wikipedia.org/wiki/Perceptrons_(book)), dating back to the work of [Marvin Minsky](https://en.wikipedia.org/wiki/Marvin_Minsky) in the 1960s. The release of ChatGPT was a 'low-key research preview', but it ended up so high-key we have forgotten it was meant to be a preview of *research*. This is experimental technology - evidently so as regards user safety.
Vector embeddings appear in the 2010s, and see a number of useful applications in natural language processing. The transformer model is from 2017. People were building useful language "AI" on top of OpenAI's very own GPT-2 and GPT-3, for translation, writing assistance (both fiction and non-), and the kind of chatbot useful for customer service, which would take unstructured input in natural language - a question typed into a text box - to return entries in a knowledge base, or perform basic triage before escalating to a human.
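If you've never met that kind of system, here is a minimal sketch of the embedding-based lookup described above. The `embed` function and the toy knowledge base are hypothetical stand-ins (a real system would call a trained embedding model over a real knowledge base), but the shape of the technique is the same: a question comes in, the closest entry comes out, and nothing 'talks back'.

```python
# Minimal sketch of embedding-based retrieval over a knowledge base.
# `embed` is a hypothetical stand-in for a real sentence-embedding model;
# here it just counts words from a tiny fixed vocabulary.
import numpy as np

VOCAB = ["password", "refund", "delivery", "cancel"]

def embed(text: str) -> np.ndarray:
    return np.array([text.lower().count(w) for w in VOCAB], dtype=float)

KNOWLEDGE_BASE = {
    "How do I reset my password?": "Settings > Security > Reset password.",
    "Where is my delivery?": "Orders > Track shipment.",
    "Can I get a refund?": "Refunds are available within 30 days of purchase.",
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def lookup(query: str) -> str:
    # Return the knowledge-base entry whose embedding is closest to the query's.
    q = embed(query)
    best = max(KNOWLEDGE_BASE, key=lambda k: cosine(q, embed(k)))
    return KNOWLEDGE_BASE[best]

print(lookup("I forgot my password, please help"))  # -> the password-reset entry
```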
# Don't take no backchat from the machine
Note that none of those applications *'talks' back at you*. AI that talks back is dangerous: because it's trained to maximise how human it seems, it leads to stories like that of [Blake Lemoine](https://www.criticalopalescence.com/p/is-blake-lemoine-really-all-that). A Google engineer working on their LLM tuned for '[dialogue applications](https://en.wikipedia.org/wiki/LaMDA)', he started claiming, in the summer of 2022, that the chatbot was sentient (and wanted the world to know), announcing it first within the company, then to the world, which got him fired. You would think this would have acted as a cautionary tale beyond Google, who clearly had a ChatGPT-like chatbot in the works, but was being cautious in deploying it. Then OpenAI jumped the gun, and everyone else had no choice but to follow, because user safety is only of concern to shareholders if it is proxied by profit.
There are many wonderful applications of LLMs. Just the ability to issue, in English, relatively complex instructions to find information or organise documents - that's a killer feature for any operating system or cloud storage platform. It could have been thus. But chatbots play on humans' natural tendency to anthropomorphise, or otherwise ascribe agency to, [inanimate objects](https://en.wikipedia.org/wiki/Pareidolia). AI chatbots are *a more successful penny-farthing* - equally dangerous, slightly ridiculous if you take a step back and think about it critically for a minute, but so alluring that they are closing down our interpretive flexibility, with the chatbot assistant, and more generally the dialogic interface, becoming the expected norm for generative AI.
# The implication(s)
When you use *a chatgpt* (used here as a common noun, a [*generic trademark*](https://en.wikipedia.org/wiki/Generic_trademark), as in ["go see a star war"](https://www.youtube.com/watch?v=L0T3XPfQgNI)) to summarise a document, or rephrase an email, there is no technical reason for this to happen as a dialogue with a system speaking in the first person. The fact that the bot is presented as a general anthropomorphic assistant blurs the line between use cases LLMs do well (rewording the email to your boss) and those they do less well (counselling and emotional support regarding your relationship to your boss). It is also social (and, in the past two years, political) forces that mean you find yourself using as a therapist a system whose understanding of therapy comes from fiction (web-scraped training data hardly contains actual therapy transcripts), when there are [specialised systems](https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits) developed just for this purpose, which have passed actual [controlled clinical trials](https://ai.nejm.org/doi/full/10.1056/AIoa2400802).
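To make that concrete, here is a minimal sketch of summarisation as a single, stateless instruction-to-text call - no persona, no conversation, nothing speaking in the first person. `llm_complete` is a hypothetical placeholder for whichever model you would actually call; the point is only that nothing about the task requires a dialogue.

```python
# Sketch: summarisation as a stateless text transformation, not a dialogue.
# `llm_complete` is a hypothetical placeholder; a real system would send the
# prompt to a language model and return its completion. Here it just truncates,
# so the sketch stays runnable.
def llm_complete(prompt: str) -> str:
    return prompt[:160] + "..."

def summarise(document: str) -> str:
    # One instruction in, one piece of text out: no persona, no chat history.
    prompt = (
        "Summarise the following document in three sentences.\n\n"
        f"{document}\n\nSummary:"
    )
    return llm_complete(prompt)

report = "Q3 revenue rose 4% on subscription renewals, while hardware sales fell."
print(summarise(report))
```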
Gigantic multimodal 'frontier' models promised economies of scale they failed to deliver, language models being both more effective and more efficient when trained for specialised applications - not to mention the training data's size is more manageable, and its contents easier to curate. But having one bot for everything suits the ambitions of the owners of the big models ([Varoquaux, Luccioni and Whittaker, 2024](https://arxiv.org/abs/2409.14160)).

You're using them for all those applications because they're already in an app on your phone, or an open tab in your browser. That's because, in spite of the [contradictions](https://dl.acm.org/doi/10.1145/3705294) and [dangers of the design](https://www.papsychotherapy.org/blog/when-the-chatbot-becomes-the-crisis-understanding-ai-induced-psychosis) becoming more and more apparent, the tremendous success of *the* ChatGPT has gotten penny-farthings obscene amounts of investment. AI companies running foundation models have a vested interest in your using them for *everything*. I'd say financial interest, if they were not currently [increasing their losses](https://www.wheresyoured.at/wheres-the-money) with each additional query. The financial interest, reflected in the - equally obscene - valuation of those companies, is predicated on the possibility, in the future, of monetising our dependency on it, and the information it has about us: not just metrics of use, but text *we will have willingly input on it* (yes, I have noticed the switch in pronouns too). Once the time comes to cash in, *a chatgpt* will have years' worth of personal information on each regular user, to be monetised in [ruthless](https://erinkissane.com/meta-in-myanmar-full-series) and [careless](https://en.wikipedia.org/wiki/Careless_People) ways, if the platforms that have embedded themselves into our daily lives are anything to go by.
# Final thoughts
It took nearly twenty years for social forces to push the design, then adoption, of features making the bicycle safe and useful. The interests of manufacturers aligned with a safe and useful bicycle, which would sell more; the only incentive for AI companies is to maximise usage, that is, users' attention, time, and data input. In the excitement of ChatGPT's launch, we are settling towards the least safe and most ecologically damaging LLM artefact - leaving aside its actual utility, about which many are [cooling their ardour](https://www.theregister.com/2025/07/09/csuite_sours_on_ai). We should have a variety of bikes for a variety of uses: road, cross-country, folding, electric, rental... Instead we are getting plenty of manufacturers pushing on us a universal penny-farthing, to be used for everything.
*Note: This essay contains no synthetic text. Hopefully obvious, but it does bear making explicit.*