it’s being used to protect us? Now that is thoughtful:
Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.
Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.
In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”
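If you want to see the narrowing for yourself, here is a minimal, hypothetical sketch of that "copy of a copy" loop (not the researchers' code; the sample size, seed, and generation count are arbitrary choices of mine): fit a one-dimensional Gaussian to some data, sample fresh "synthetic" data from the fit, refit on that, and repeat. Because each generation estimates its parameters from a finite sample of the previous one, the fitted spread tends to drift downward over time, which is the toy version of the "narrower range of output" the paper describes.

```python
# Minimal sketch of recursive self-training ("model collapse" in miniature).
# The "model" is just a Gaussian's mean and standard deviation; each new
# generation is trained only on samples drawn from the previous generation's
# fit. The fitted spread tends to shrink across generations.
import random
import statistics

random.seed(0)

N = 20            # samples per generation (small on purpose, to make drift visible)
GENERATIONS = 50  # how many times we retrain on our own output

# Generation 0: "real" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for gen in range(GENERATIONS + 1):
    mu = statistics.fmean(data)     # fitted mean
    sigma = statistics.stdev(data)  # fitted standard deviation
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Next generation sees only synthetic data sampled from the current fit.
    data = [random.gauss(mu, sigma) for _ in range(N)]
```

Run it a few times with different seeds: the mean wanders, but the standard deviation almost always trends toward zero, i.e. the "model" keeps imitating a shrinking slice of what it started with.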
Apparently, visual artists have been attempting to poison the models for a while now, to the point where they can’t tell the difference between a cat and a cow. Turns out even in Plato’s Cave you need people who know things.
But a model using its own output to replicate itself is, shall we say, projecting deformity.
Hapsburg AI, indeed.
Image: Based on research by Ilia Shumailov and others.