
I used AI and I like it

It’s fascinating to see that Claude is now making architectural decisions for you that exceed your own capabilities.
Have you tried it? Yes or no? At what point do you do better?

A flight simulator also flies incredibly realistically without ever leaving the ground.
You know that this is a fundamentally different concept. The way you referred to 'algorithms' earlier suggests that you consider them similar. They're not. There's a fundamental difference.

Moreover, you bring up (probably not deliberately) the question of how much walking and quacking are needed to constitute true duckness. I suppose we will all agree that a flight simulator does not constitute the full reality of actual flight. For one thing, the odds of crashing & burning are a lot slimmer. Then again, the simulator apparently is real enough to function as a replacement for a large portion of real-world, physical training. It has a fairly high 'duckness score'.

Similarly, at some point, the question becomes inevitable how we define intelligence. I think currently we agree that an LLM is not intelligent in the human sense. My example about programming was meant to show that despite this lack of proper intelligence, which you seem to regard as a sign of inferiority, the model is able to perform work that we until recently thought was reserved exclusively for 'proper' human intelligence. Despite their shortcomings, LLMs seem to be able to do a fair deal of walking and quacking, so it's easy to see how they could functionally substitute for a duck - until dinner time comes, of course.

One can certainly acknowledge the impressive performance of the current generation without losing sight of the sober mathematical reality of the 'next-token predictor'.
The human brain is also just the sober biochemical reality of neurons firing. Very ho-hum, modest and uninspiring. Surely, it won't amount to much.
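For what it's worth, the 'next-token predictor' idea can be sketched at toy scale. The snippet below is a deliberately simple illustration, not how an actual LLM is built: it uses raw bigram counts over a tiny made-up corpus where a real model uses a learned neural distribution over tokens, but the generation loop - look at the context, pick a likely next token, append, repeat - is conceptually the same.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent successor. A real LLM
# replaces these counts with a neural network's probability
# distribution, but the generation loop is the same in spirit.
corpus = "the duck walks and the duck quacks and the duck swims".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length):
    out = [start]
    for _ in range(length):
        counts = successors.get(out[-1])
        if not counts:
            break  # no known successor: stop generating
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

Whether that mechanism, scaled up by many orders of magnitude, deserves the label 'intelligence' is exactly the duckness question above.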

Trust me, I know the real-world strengths & weaknesses and how they relate to the conceptual underpinnings of how present-day AI works. Regardless of the shortcomings, it's too easy to say "it's just this or that" - that reductionist "just" doesn't get you very far. Also, I figure it's not so much about truly wanting to understand what's going on or reflect on it, but more about finding a way to maintain a position of superiority vis-à-vis the technology through a rhetorical device. I understand that tendency/desire, especially in the face of such rapid developments that many experience as a threat (sometimes, but not always, rightly so). Fear not, though - the machine won't strip you of your humanity. There'll always be that.

What I'm saying is that your earlier post fits into a long and rapidly expanding narrative of trying to bring a normative dimension to how we evaluate the present (and continuously evolving) capabilities of LLMs/AI. I frankly just don't get that. It doesn't get you anywhere at all. It helps nobody. It just sounds sour and frankly a bit like someone who would prefer to huddle in a corner with a newspaper over their head. I don't think that's a very sensible way to relate to changes in society.

Sorry to be so critical of what you said, but it just seemed (still does) like a serious underestimation of what LLMs are presently doing and how they might (will) affect society. I'm referring to what you said here, emphasis added:
You just have to use this software for what it’s intended for: summarizing or generating texts based on what the internet knows.
That's just scratching the surface of today's applications - let alone tomorrow's!
 