People worry about AI gaining control over humanity, about its potential to supercharge misinformation and about how it might help to perpetuate insidious bias and inequality. Some are trying to create safeguards to prevent this. But communities are also trying to create AI that helps where it should, perhaps including as a writing aid. Yet, as my experience shows, ChatGPT corrupted the whole process simply by existing. I was at once annoyed at being mistaken for a chatbot and horrified that reviewers and editors were so blasé about the idea that someone had submitted AI-generated text.
We need to be able to call out fraud and misconduct in science. In my view, the costs to people who call out data fraud are too high, and the consequences for committing fraud are too low. But I worry about a world in which a reviewer can casually level an accusation of fraud, and a journal editor simply shuffles the review along and invites a resubmission.