Yes sorry for that. Edited: "polling and stitching" are Alan Edward Klein's words, not mine; did you mean to reply to him?
If true, you seem to be making the case that AI can create?
Yes I do. But whether it is art or not, maybe we need a new definition of art.
It would for sure demonstrate excellent craftsmanship.
But art in its deepest existential form? I am not sure. For me, art in a very deep way is strongly related to death. And if they cannot experience this agony and the metaphysical feeling of the passing of time, then why create, and how should we approach it?
All true. But he is a murderer, not a tradesman. Perhaps his regular hammer was left at the crime scene, and the tintype studio provided this one as a prop?

There's no rust or wear on the hammer. The hammer is the wrong size, not proportionate: the handle is too long, and the head doesn't fit. Also, what tradesman holds a hammer like this? The AI looked at thousands of tintypes from the era and stuck a hammer in where a six-gun would have been. The hand placement is correct; everything else it got wrong.
This is the heart of the problem with all discussions about art -- how do you define it?
I'd point these out as having AI red flags if I ran across them in the wild. There's just something subtly off about the photos.
I agree.
6-12 months down the road, or perhaps even today with a little attention to the prompting, that would probably disappear, and neither you nor I would recognize these images as AI-made.
We are then left with the inconvenient realization that much of the photography we're used to isn't very original, at least as viewed on a computer screen. Which might feel a little uncomfortable at first, but I think in the end, it'll bring a couple of possibly more comforting ideas.
Firstly, as we all know on this forum, a computer screen is a poor facsimile of a real print - and there's still a degree of magic to seeing, fondling and admiring a real, physical print. Of course, there's no real print underlying @Sean's examples; they're visually pleasing images as such, but no tangible artifact underlies them.
Secondly, once we acknowledge that the vast majority of the photography that's made (including by us, and certainly by myself) isn't particularly original, the creative aspect and perhaps also the intrinsic value lies in the hands-on nature of the process, and/or the fact that we have shaped the end result at every step along the way. I.e. it's not just the end result that represents the value; it's also, and to a large extent, the road that leads up to it.
AI can't touch that.
It's a different question, though, from the one this thread started with.
Also, as to your example of the comic group you're part of - your complaint the way I see it is with this one guy who doesn't want to play by your rules. The fact that he uses AI is of lesser importance. Had he made a poor ripoff version with his own hands, or had he outsourced it to Pakistan, you might have felt quite the same. It's not the car's fault that the dog was run over. The driver had a lot to do with it.
Can you share them with us?
These are two recent examples. I would not be surprised if there are many more. We could be in a situation where LLMs take a back seat or become one of many layers of something more (something we may not even understand: the "black box" issue).
- Grounded Reasoning: Because it builds a "Universal Simulator," it doesn't just predict the next word; it simulates the outcome of an action before taking it.
- Self-Correction: It can identify when an action fails and update its own "world model" internally, which is a hallmark of general intelligence.
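The two properties above describe a simple loop: predict the outcome of an action with an internal world model before acting, then update that model when reality disagrees. Here is a deliberately tiny, purely conceptual Python sketch of that loop (my own toy illustration, not Meta's code or the actual VL-JEPA architecture; all names here are made up):

```python
# Toy illustration of a world-model agent loop: simulate an action's
# outcome internally before acting, then self-correct the model when
# the real outcome disagrees with the prediction.

class ToyWorldModelAgent:
    def __init__(self):
        # Internal "world model": (state, action) -> predicted next state.
        # Starts with a wrong belief so the self-correction step fires.
        self.model = {("door_closed", "push"): "door_open"}

    def predict(self, state, action):
        # Grounded reasoning: simulate the outcome without acting.
        return self.model.get((state, action), "unknown")

    def act(self, state, action, environment):
        predicted = self.predict(state, action)
        actual = environment(state, action)
        if predicted != actual:
            # Self-correction: update the world model from experience.
            self.model[(state, action)] = actual
        return actual

def real_world(state, action):
    # The door is locked: pushing does nothing.
    if state == "door_closed" and action == "push":
        return "door_closed"
    return state

agent = ToyWorldModelAgent()
print(agent.predict("door_closed", "push"))  # "door_open" (wrong belief)
agent.act("door_closed", "push", real_world)
print(agent.predict("door_closed", "push"))  # "door_closed" (corrected)
```

The real systems do this in learned latent spaces rather than with lookup tables, but the predict-act-compare-update cycle is the part the bullet points are describing.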
(VL_JEPA) https://arxiv.org/abs/2512.10942
"The release of VL-JEPA (Vision-Language Joint Embedding Predictive Architecture) by Meta AI represents a significant shift in the philosophy of AI development. While it is not "AGI in a box," many researchers—most notably Meta’s Chief AI Scientist Yann LeCun—believe it addresses the fundamental flaws of current Large Language Models (LLMs) that prevent them from reaching human-level intelligence."
AGI may require a convergence of emerging technologies
"It is unlikely that any single model—be it VL-JEPA or the Integral model—will be AGI on its own. Instead, the consensus in 2026 is that AGI will likely emerge from an integrative system:
- VL-JEPA provides the visual/linguistic world model.
- The Integral Model provides the ability to plan and act.
- A Reasoning Engine (like OpenAI’s "o" series or DeepMind’s AlphaProof) provides the formal logic."
Then there are the things going on behind closed doors, with some of the big players hinting at "new science"-capable AIs in early 2026.
I'd also add that I don't think we even need AGI for incredible advances. Have a look at what Google's Isomorphic Labs and AlphaProteo are doing: https://www.isomorphiclabs.com. Their AI-designed proteins could help treat countless diseases.
The next two years are going to be super interesting.
I think one could argue that the process you describe is pretty much how many human artists work as well.
There is a saying, the gist of which goes back many years, which says, “Good artists copy. Great artists steal.”
I will study them carefully, since the topic really interests me. One remark, though: since I have been in the AI field for some time, take what these big companies are saying with a grain of salt. Usually they only care about their stock value, so they publish ridiculous claims.
For me the road to AGI is really long. We would need total "immersion": a system that could be "embedded" in the world in real time and constantly learn from all kinds of interactions with it, similar to a newborn.
Sort of like a dumb kaleidoscope. Where's the innovation?

Well, I wouldn't be so sure, Alan. They have recently solved mathematical problems humans have struggled with for ages.
They don’t really copy; they combine ideas and patterns, identifying connections in the most novel ways. Isn’t that also what creativity is about?
As a computer scientist, I don’t think AGI is even close, yet we have already reached some of these models' limitations. How much more data can you give them? You’ve already fed them the goddamn internet.
So AI scientists are searching for novel algorithms and causal representations to move AI research further. But at this point I wouldn’t even worry.
My four-year-old grandson already laughs at jokes and feels sad at times, all because he has a sense of himself, an ego. He has feelings. He is a moral being with a heart. The learning process for humans is not only about adding 2+2 or figuring out data and displaying it in some form, but also about assessing whether the answer involves humaneness and moral clarity, and provides real inspiration. Otherwise, a human would be like a computer: a high-speed idiot processing ones and zeros very quickly, but with no contemplation of the world. Computers and AI are just extensions of that. They're great at processing math and huge amounts of data quickly, better than humans. But they don't know what to do with it unless we program the analysis. It's like looking at your monitor screen and claiming how smart our computers are because they can show us all this stuff. Does anyone think their TVs and monitors are smart, much less innovative?
All true. But he is a murderer, not a tradesman. Perhaps his regular hammer was left at the crime scene, and the tintype studio provided this one as a prop?
My point being, some images are created by human photographers to evoke a feeling or create a mood, and strict adherence to factual details is not always a requirement. The movies we love to watch are good examples of how we sometimes like to be fooled by the fantastical -- to suspend our disbelief and just enjoy the spectacle. Not every movie has to be made to Ken Burns standards, and not every photograph has to be accurate documentation to be successful.
Personally, as a viewer, I can enjoy the tintype of hammer man for what it is, without paying too much attention to the details.
Now if it had been me asking Nanobanana to create this image, I probably would have added more prompts to make the hammer bigger, or whatever. If the hammer is inappropriate for this image, is it AI's fault for not getting it right? Or is it Sean's fault for not asking the right prompts?
But that raises a question. If the AI program can't tell the difference between a knife and a hammer, or when one or the other is appropriate, how can you expect it to be innovative?
Regarding the second point about the correct prompts, if it requires the operator to ask the correct prompts to get a result that makes sense or maybe even be innovative, then it's the operator who's being innovative, not the AI program. AI is no more than a paintbrush. But we have to point the way to creativity.
So it's the human who's creative, not AI. Without the correct human prompts, AI is lost and creates nonsense. Just like you need a human hand for a paintbrush to make a painting.
Alan, if we set aside any philosophical, religious, or metaphysical considerations, which I’ve deliberately avoided here, there are today AI researchers who believe that some behaviors observed in large-scale AI models resemble processes associated with human sentience!
I’m not presenting this as a fact, but rather to show that the debate is genuine, and that many experts believe we may be approaching something truly groundbreaking.
I think AI is being overhyped, either out of greed by corporations and investors, or out of the exciting belief that we've discovered a new god, or at least a new superhero. I think we're going to find out it's just another tool used by humans, allowing us to work quicker and more effectively, creating things as an extension of our own minds.
Could you show us any of that work at this point?
So it's the human who's creative, not AI. Without the correct human prompts, AI is lost and creates nonsense. Just like you need a human hand for a paintbrush to make a painting.
What makes you think AI cannot tell the difference between a knife and a hammer? I'm pretty sure if Sean had asked for a knife or a hammer in the prompt, he would have got what he asked for.
If I understand correctly, Sean was giving examples of what AI comes up with when given only minimal prompts. The prompt for the tintype was simply, "Do a tintype" -- which suggests a style, but is very nonspecific about subject matter.
Many historical tintypes were portraits, so the subject of the AI image is not surprising. But the fact that a hammer appears in this AI image -- without being prompted -- is interesting! If I didn't know better, I might think the AI was having a little fun with us. ;-)