Commenting on HEPI's latest AI student survey, Josh Freeman writes that "AI is here to stay. Universities should take note". The large language model (LLM) technology behind the current AI wave will certainly find many uses in higher education, and in knowledge work at large. We should not, however, assume AI will stay *in its current form*: the widely available, general-purpose assistant chatbots of a handful of multinational corporations. The note to take is that universities can influence *what kind of AI* stays.

Before ChatGPT, ‘AI in Education’ meant specialised systems, [sometimes even developed with educators](https://doi.org/10.1186/s41239-019-0171-0). These followed a clear pipeline of education-informed development and validation before eventual deployment; what deployment there was was cautious and incremental. Today, however, ‘AI’ more often than not means a general-purpose multimodal chatbot. This is a powerful, alluring - even addictive - technology, but it represents one specific implementation of the LLM, and a narrow and potentially harmful one.

The idea that AI is inevitable is unhelpful [technological determinism](https://en.wikipedia.org/wiki/Technological_determinism). A better lens is Bijker's [social construction of technology](https://sciencepolicy.colorado.edu/students/envs_5110/bijker2.pdf), in which new technologies go through a phase of *‘interpretive flexibility’* where their final form is up for grabs.

Bijker's classic example is the bicycle. *For nearly twenty years*, ‘bicycle’ meant the penny-farthing - certainly two-wheeled, but hardly what comes to mind when you hear the word ‘bicycle’. Penny-farthings were fast, birthing sport cycling; they were also dangerous and impractical. Yet they were successful with one ‘relevant social group’ (young, thrill-seeking men), which sustained a bad design for two decades. It was only when other groups (women, older people) demanded a safer alternative that the modern ‘safety bicycle’ emerged, with a chain drive and then air-filled tyres, eventually leading to a consensus, or ‘*closure*’, on what a bicycle should be.

Universities, traditionally slow-moving institutions, must not rush to adopt the penny-farthing of AI - thrill-giving, but impractical and dangerous. Still less should they strike, with the tech giants that control it, bargains it would be a cliché to call Faustian.

Over the past two decades, universities have seen a steady erosion of their autonomy through dependence on concentrated technological power, from cloud infrastructure to a model of academic publishing we cannot call a monopoly only because there are four of them. This has been a slow process of *de facto* institutional capture, but it is painfully visible to someone in my situation, returning to graduate school after an MSc in 2006, having worked as an engineer in web technologies in the interim.

Being locked into a digital platform for information storage and exchange is one thing. It is another entirely when that platform is an AI shaping the very production and transmission of *knowledge*. AI does not ‘know’ things as we do. Humans know about the world; LLMs know about the text that purports to describe the world. Their apparent understanding comes from abstracting statistical patterns from vast corpora of text, and this knowledge has no reference to *ground truths*. Their output is a form of sophisticated mimicry, unconcerned with truth or falsity, only with statistical likelihood given the context.
This makes them, in the philosophical sense defined by Harry Frankfurt, engines of [bullshit](https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit): language designed to persuade without any regard for the truth. Nevertheless, these systems are excellent at solving problems whose solutions can be presented as text, which includes many encountered in teaching and learning. Ignoring AI is not an option, but neither is forgetting the limits of its design.

When institutions integrate these systems uncritically, they risk outsourcing the core function of a university. Incentive structures in the neo-liberal model of higher education have already led universities to surrender their infrastructural autonomy. There is now enormous pressure to do the same for *epistemic sovereignty*: the power to access, control and validate the knowledge that defines a university. Partnering with AI companies seems to solve many problems: what AI skills to teach, how to protect assessment, and how to give researchers low-cost access to powerful tools. But all of this will be mediated by an opaque system owned and operated by a foreign commercial entity.

Since training an LLM from scratch is too costly, universities could instead collaboratively fine-tune open-weight models with transparent data, and deploy them as tools for specific educational purposes. This is a big ask for institutions facing financial and regulatory pressure, so another hope lies with independent, transparent EdTech providers. Any alternative to Big Tech seems prohibitively expensive, but only because the unprecedented, titanic flow of venture capital into AI allows products to be offered at low or no cost. The bubble will pop, but some ideas in tech are too big to fail. Twitter, for instance, took more than a decade to post a profit, and its two profitable years have not come close to covering the losses accrued before or since. A major AI company can operate at a loss for decades to embed itself as critical infrastructure, only monetising its user base later. Surely we have learnt something from 'social' media?

The UK higher education sector needs to proceed with extreme caution. The conversation around AI must move beyond a simple discussion of ‘challenges’ to a frank assessment of the threats to privacy, data sovereignty and the integrity of knowledge, and a search for home-grown (national and institutional) alternatives. A wise policy approach would be for the sector to collectively invest in and support the development of open-source, transparent AI models tailored for education. This is the only way to ensure that the future of our knowledge is not determined by a handful of companies whose primary incentive is not truth, but user engagement.