We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
So much ado for the modern equivalent of burning down a library.
Which probably already happened in training it in the first place. The article is about Anthropic, but xAI would do the same if they could.
https://arstechnica.com/ai/2025/06/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models/
I guess that’s right.
It's just that what I meant by "burning down a library" was the loss of information that would result.
I was referring to historical library burnings, of which I can recall two: the Library of Alexandria and Nalanda Mahavihara.
In the modern day, where material is digitised and mass printed, burning books is not really going to cause any loss of that information (much less scanning them, which would cause nothing except potential loss of revenue to the publisher).
On the other hand, if you fill the internet with false information, such that someone looking something up sees the false version long before the real one, that would have a similar (or even worse) effect on civilisation.