Bixonimania didn’t exist before 15 March 2024, when two blog posts about it appeared on the website Medium. Then, on 26 April and 6 May that year, two preprints about the condition popped up on the academic social network SciProfiles (see https://doi.org/qzm5 and https://doi.org/qzm4). The lead author was a phoney researcher named Lazljiv Izgubljenovic, whose photograph was created with AI.
Osmanovic Thunström says the idea to invent Izgubljenovic and bixonimania came out of studies on how large language models work. When she teaches her students how AI systems formulate their ‘knowledge’, she shows them how the Common Crawl database, a giant trawl of the Internet’s contents, informs their outputs. She also shows students how prompt injection — giving an AI chatbot a prompt that shunts it outside of its safety guard rails — can manipulate the output.
The bixonimania experiment is a fresh spin on a bigger issue — the poisoning of AI systems by people who manipulate the academic literature. Elisabeth Bik, a microbiologist and research-integrity sleuth, notes that researchers have created fake books and papers to inflate their citation counts on Google Scholar — thereby exploiting the same automated indexing systems that feed into LLM training data.





















