A lot of what you cite had already started without AI. People's dependency on the internet has severely eroded independent thinking, rote skills such as spelling and arithmetic, knowledge of history, and so much more.
Actually, letting AI deal with spelling, grammar, arithmetic, and other rote tasks will allow humans more time to think.
Maybe, but it reduces the quality of their thinking. Disciplines like math and grammar require precision and critical thinking, aspects that are quickly diminishing among younger generations.
So I was skeptical after my first attempts at using ChatGPT. Someone suggested I try Claude.ai.
My use case was originally language learning... I am learning a language that is not widely used and has about 8 million speakers. I have found that Claude is VERY good at producing fluent, native-sounding sentences and texts. By this I mean written in the manner in which a native speaker would phrase them, idioms included. These are cases where ordinary translators like Google Translate completely miss the mark.
I am in. I continue to experiment with other use cases. One big thing with these AIs is to not just accept the first answer to a complicated question. Probing questions and dialogue produce much better results. It is sort of like just accepting the first Google search result versus really probing, following links to other sources, etc.
Adding 2+2 isn't critical thinking. If it is, we should get rid of calculators and spellcheck. Critical thinking is understanding how inflation works, how it affects your paycheck, and what to do to protect against it. Critical thinking is examining what photographic art is, not forgetting a comma.
AI requires human beings to copy.
Soon to be out of a job
Models for swimsuit catalogs
Annoying customer service phone operators
Porno movie actors
Etc.
Eventually the Siliconoids will kill us all. But not before we are enslaved to clean up all the single-use plastic yogurt containers and Keurig pods everywhere.
Might interest some here: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf
No, critical thinking is actively analyzing and evaluating rather than passively accepting information.
Many here may not know of the works of Donald Knuth, but no surprise that he would find the latest iterations of AI models to be thought provoking.
AI is a wonderful tool that is going to enormously accelerate human achievement and learning - for those who embrace it rather than fear it.
My son, who is a final-year university engineering student, is being encouraged to use AI tools; it is in fact a mandatory part of the syllabus.
AI could copy the style of any photographer, but the result would be manufactured, not real.
Knuth (deep bow)! Back in another life before I retired I was a factory automation specialist - vision-guided robotics and stuff like that. I can only imagine how much more effective I would have been had AI been around at even today's level.
I have been working on a web-based automation project and using the Claude Code CLI as my assistant. The productivity is incredible! A task like refactoring code, which can be a real viper's nest, can be absolutely trivialised by Claude. Debugging code? Once again trivial.
There are caveats. Claude needs guidance. Claude needs someone who understands good practices in the field to guide it. Claude will take shortcuts that result in poor practice, or in solutions that aren't universal and usable elsewhere. But wow, with a little guidance in a technical domain it is outstanding.
AI is a tool. Like a knife, it can be used for good and bad. You can cut up a chicken, or your neighbor.

Ah, the good side of technology. I worked for a year as an analytical chemist for General Mills, over 30 years ago. One of the products was a variation of Fruit Roll-Ups. God-awful fruit pectin and corn syrup sold as an alternative to "candy". Early technology was using video to recognize the pattern of the product for QC.
I was amazed.
Unfortunately AI is going to be used to mislead and to market. It's also going to revolutionize medical care and help people with every little thing. Drunk driving? The car won't let it happen.
Good and bad.

AI/LLMs aren't true intelligence and work in "predictive" ways. At their core they predict what the next word, pixel, or result will be and choose the "best" option.
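As a toy illustration of that "predict the next word" idea, here is a made-up bigram table with greedy "pick the best option" decoding. The words and probabilities are invented for the example; real models learn billions of parameters and condition on the whole context, not just the previous word, but the basic loop is similar:

```python
# Hand-made table of follow-word probabilities (purely illustrative).
probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_word(word):
    """Pick the most probable next word (greedy decoding)."""
    options = probs.get(word)
    if not options:
        return None  # no known continuation: stop generating
    return max(options, key=options.get)

# Generate by repeatedly choosing the most likely continuation.
sentence = ["the"]
while (w := next_word(sentence[-1])) is not None:
    sentence.append(w)

print(" ".join(sentence))  # the cat sat down
```

Sampling from the probabilities instead of always taking the maximum is what makes real models give different answers to the same prompt.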
You just have to use this software for what it’s intended for: summarizing or generating texts based on what the internet knows.
Yup, it's clear @tjwspm hasn't had a lot of recent LLM runtime experience. You're describing the state of play 2-3 years ago. Not now.
While it's technically true that it's predictive and that it doesn't really 'think' or have a concept of reality, if you aggregate the seemingly dumb 'summarizing and generating texts' up to a higher level, it does things that rival and often exceed the human ability to do them.
A good example is the generation of code. I've been using AI (Copilot and Claude) quite intensively for this (hobby projects). In the very recent past, it was hit-and-miss whether a snippet of code would even compile, let alone do what you wanted it to do. Presently, we're at the point where the commercially available generation of, e.g., Claude can more or less autonomously extend a project by adding modules within an existing architecture, and make architectural changes as required to accommodate the new functionality. Likewise, it can build entire architectures from the ground up and then implement them. For amusement's sake, the other day I had Copilot generate a specification and then asked Claude to implement it. The result just worked.
That's not to say I find the architectural decisions optimal or that I necessarily like them. I find that they generally deviate from how I had intuitively envisioned it, but I also recognize that this is mostly because I implicitly adhere to a couple of ground rules that I often fail to make explicit in my prompts (for the simple reason that I don't realize they're in the back of my head), and as a result, the LLM arrives at its own optimization of the problem. Currently, those optimizations and architectures vastly exceed what I could set up in a reasonable amount of time. AI just has a lot bigger repository of potential solutions that it can readily pick & choose from. It doesn't suffer from the same degree of mental limitation as my human brain.
This is just one specific example, and of course one that AI has been optimized for. That's also a matter of tactical choice on behalf of the AI companies: to make tools to help them develop the next generation of tools. It would be silly not to, after all.
Arguments involving "it's just this or that" are suspect by definition. The technology presently catches up with those arguments before they're even typed out.
