Patrikios, while I occasionally use Grok and ChatGPT, these articles I have been working on have all been Claude. I "suspect" but cannot confirm that the reason I am finding Claude's output so useful is that I have now spent several months pointing it to, asking about, and uploading many of our past articles, plus much material from Dewitt and Sedley and others which I think are the most perceptive. I am combining that with a lengthy series of instructions about which perspectives on issues seem to me to make the most sense. For example, feeding it the list of 15 principles on the front page here, plus some number of academic articles with which I agree, etc., is what is prompting it to produce such good drafts. I definitely have to read each word, however, because even against my strict instructions it will still fall back into views that I consider to be widely accepted but wrong.
For example, I am sure that it would be completely happy to produce an article saying that Epicurus merits little more than a footnote in history, that the real genius was Democritus, and thank goodness we have Aristotle so Ayn Rand could base her arguments on him and we could all be Objectivists. It appears to me that Claude or any other AI engine is going to give you what you ask for - within limits - especially if there is a body of work out there which agrees with that opinion.
So while there are obvious and very great dangers with AI, I don't think in the end that it can ever replace a strong human editor who has an end-goal in mind. No doubt the AI programmers code their own preferences into the system, but AI doesn't "care" about what it is producing unless it violates one of those hard-coded rules.
We as the authors taking responsibility for the output have to guarantee that it is worthwhile. AI doesn't care about us, and I continue to agree with the AI critics on this point: in the end, I don't care what AI's opinion is either. It's a tool, but it is no substitute for human direction.
Getting back to your initial question, I suspect it's entirely possible that a similar investment in Grok or ChatGPT or any other engine could produce similar results. And I am sure that at some point there's a wall to hit as to storage and other costs involved in particular platforms; I'm not paying for anything more than a "basic" tier of service at this point. And no matter how much storage and how many data points you load into the system, there are always going to be conflicts between the sources, so ultimately you have to be sure it follows reasoning that you yourself are willing to stand behind.
All this is fascinating and really is a brave new world.