It's really effing obvious LLMs are a con trick:
If LLMs were actually intelligent, they would be able to learn from each other and get better all the time. But what actually happens when LLMs only learn from each other is that their models collapse and they start spouting gibberish.
LLMs depend entirely on copying what humans write because they have no ability to create anything themselves. That's why they collapse when you cut off their access to human-written text.
There is no intelligence in LLMs; it's just repackaging what humans have written without their permission. It's stolen human labour.
@FediThing Playing devil's advocate, couldn't you say the same about a human just beginning to talk?
@chrisamoody @FediThing No. The human just beginning to talk is having such original thoughts that they *often* have to invent words.
@chrisamoody @FediThing There are whole social media pages devoted to baby-invented words, and people find them charming. Not just their own babies'. Because they are so often not gibberish, but exactly perfect encapsulations of an idea.