BrianShaw
Member
So it seems that ChatGPT knows little about:
I say all this as a technology professional who finds the tech fascinating and uses it regularly.
ChatGPT's answers are always well formulated, but they are only rarely accurate on more technically involved topics where the available online information is inconsistent or incomplete. ChatGPT is, in the end, a language model trained on existing and relatively easily available (online) data, which makes it particularly vulnerable to the old adage of Garbage In, Garbage Out. Contrary to what the moniker suggests, AI is not actually intelligent in the sense that it can weigh, interpret, and assess information.
Then you're also aware that the 'algorithms' behind language models like ChatGPT et al. are not easily modified other than through the selection of training data. The model itself is a black box, even to those who helped build it. This leaves open the question of to what extent the choice of training data is deliberately tailored by the agents/institutions you mention towards specific preferences. This is likely not a yes/no answer but a massive grey area, and some biases may turn out not to be all that deliberate, but simply the result of opportunistic selection of training data, or of the same kinds of biases and oversights every human is liable to.
Having said this, from the perspective of a moderator I need to remark that exploring the nature of these AI/language models is OK, but hypothesizing (let alone anything that even remotely approaches proselytizing) about the political force of tech companies is likely to be qualified as 'political' and we may/will cut such debate short for this reason.
This assumption is quite widespread and the role of online datasets is often exaggerated. The OpenAI team has always been very much aware of the "online garbage problem" and the datasets used for training are highly curated. I am not too close to them, but I am fairly sure that online forum data wasn't used for training.
But what's interesting is that LLMs aren't good at admitting and clearly saying: I don't know. Instead, they proceed full speed to advice-giving. In that sense they behave exactly like a typical photrio or photo.net regular.
Understood, and I wasn't trying to go down that rathole. I was only trying to point out that the potential for silent, intentional bias exists and has been demonstrated.
More broadly, AIs won't seem truly human until they exhibit ALL of our vices by being biased, stubborn, short-sighted, ill-tempered, intolerant, and generally mean-spirited. Only then will they actually pass the Turing Test!
No worries - and yes, agreed, there are problems with bias and by extension, ethics. Heck, it's a massive expanse of quicksand!
So far, the big ones are just too polite to qualify!
How much you want to wager that someone is working on the Drunken Uncle personality on top of OpenAI ...

Not much; it's a bet I'd be bound to lose on. I'm sure our guy Murphy is hard at work on this one, too.
Let's get back to the OP's question: how to use ChatGPT to actually solve that emulsion question.
Thank you for keeping an eye on that. Let's see what he says.

I don't think that the OP asked that.
Instead, the OP related some experience with ChatGPT, and invited comment.
I was asking ChatGPT a lot of questions about the history of films a while back. It was getting almost every answer wrong.
Unfortunately, people are now using it for fake reviews and fake websites, which is going to make it even harder to find the signal in the noise.
I use it for writing macros and Arduino programs, and it never makes syntax errors.
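For what it's worth, here's the sort of thing I mean. For a prompt like "blink an LED every half second" it reliably comes back with something along these lines (a minimal sketch; the LED_BUILTIN pin and the 500 ms delay are my own illustrative assumptions, not a verbatim ChatGPT answer):

```cpp
// Minimal Arduino blink sketch, typical of what ChatGPT returns for a
// simple prompt. LED_BUILTIN and the 500 ms interval are illustrative
// assumptions, not output copied from ChatGPT.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);   // drive the board's built-in LED
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(500);                       // hold for half a second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(500);
}
```

Of course, "never makes a syntax error" isn't the same as "gets the logic right"; the compiler catches the former but not the latter.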
I don't think so. For Python, maybe.