Also, where does education go? "Do good in school, go to college, work hard, get a good job, build a life" - what happens when that evaporates as well?
That's a good question - one that IMO relates to a question I've asked before (and can't answer) about how we relate to labor as humans. Mind you, there's not going to be an end to human labor for a long time, and likely not ever. There will be changes to it, as there always have been, and these changes will be rapid, as they have sometimes been throughout history. We don't know at this point what kind of changes those will be, and consequently what this means for the educational system. I have some hope that certain basic skills will remain essential and universally required. Think of skills related to reasoning, filtering information, assessing the validity of information, basic awareness of epistemology - and of course certain social skills related to communication, relationship building, group work, etc.
I think the whole talk about AI not being capable enough is bogus, really; I'm with @Sean on this. For the most part, it's already there, and insofar as it isn't, it's a matter of a very short time until it gets there. At present, the main step still to be made seems to be from "too perfect" towards "sufficiently wabi-sabi to trick us into believing it's real". That puts us in a position where photo-realistic AI is literally just that: indistinguishable from the real thing.
Where would that leave us?
Well, let's put aside one important matter - when it comes to art, there's a heck of a lot more than photography. The thread asks about AI being used to produce art in a broad sense, so that inherently includes everything beyond photography as well. This spans forms of art that do not need to mimic anything that already exists. There are entirely new avenues open for exploration. What's limiting us mostly at this point are the limits of our own imagination and ingenuity. I follow AI 'art' generation with a skeptical eye, and for the most part it really is uncreative slop - renders of barely clothed women in violent "Lethal Weapon, Kill Bill" kinds of situations (what the heck is wrong with people). Or, when it's a little more 'out there', it remains stuck at rehashes of Star Wars and Dune type imagery of fantasy landscapes or space stations. Mind you, this is all being done with considerable technical prowess, and the images as such are pretty mind-boggling/stunning. Particularly creative or artistically compelling they are not, and that's not due to the use of AI. It's due to the fact that most of this 'successful' imagery is made basically by nerds with an interest in IT, and not by artists. Wait until capable hands (minds) start wielding this technology. We're in for a ride, for sure.
The more important issue I'd like to put to the fore is the, in my view, rather hilarious argument that AI is 'not good enough' as a means to disqualify it from the domain of the arts. That's not the point at all, in my view. That AI can be used to make art is not even up for debate, in my opinion. See above: the fact that, for now, it's mostly relatively uncreative minds using it, resulting in fairly unartistic slop (including the 'crazy realism' video above), does not mean the potential isn't there. It's like arguing that cars are useless because once in a while someone drives one into a tree. It's a logical fallacy to discredit the technology on the basis of a non-exhaustive set of counter-examples. Black swans don't exist until the day you see one, and all that. There's no doubt in my mind we'll be seeing plenty of black swans soon enough.
The real question is whether or not we'll accept that art as actual art. And I think that depends on the degree to which we recognize a human hand in it. In the end, I think the question of what is or isn't art boils down to what @thinkbrown said very early on in the thread; specifically this bit (I do not necessarily agree with what followed):
This limits the scope of what art is to work produced with involvement and arguably control by the human mind. I think in the end, that's really going to be the dividing line. And that automatically means that imagery (or sound, or whatever) that's made essentially by a digital agent with no direct control of a person over the outcome will never be accepted as art by a broader audience - with the 'audience' being defined as people who have an interest in art to begin with. That's an important caveat, because the majority of mankind in my opinion has no real interest in art. People like pretty things, overall, but that's a different matter.
The underlying question is why we would draw a dividing line between 'art' and 'not art' on the basis of human control. In my mind, that's simple - we want to be able to relate to it, that's all. We can't relate to a datacenter any more (and in fact, on average, much less) than to a pet cat. It's just too different a beast. When it comes to AI-generated work, one of the key problems is that a datacenter is just too... good. It yields a perfect result, even if we require it to be imperfect - then it'll be perfect in its imperfection. For whatever underlying psychological or evolutionary reason, we can't really cope with that as humans. Probably it's just too threatening in the end, and we can't defuse that threat in the ways we usually do.
Think about it - why do people read tabloids? For that matter, why do people read biographies of people like Einstein or Musk? A large part of it is being able to spot all the imperfections, vices, character faults, misbehaviors and flaws exhibited by the rich and famous. These are the people we all somehow measure ourselves against, who are the benchmark of success in one way or another. One of the main ways of dealing with them - and with the notion that most of us will never really succeed in life the way these happy few have (that's why they're few, after all) - is to focus not just on their success, but also on their failures. Einstein married his cousin - LOLWUT! And didn't that video of Musk smoking a joint during a podcast recording go viral? Sex! Drugs! Human weakness!
It's a lame piece of Hollywood trash in many ways, but in some ways Bicentennial Man is an interesting movie. At what point does Andrew the robot get accepted (American-style: in a legal sense) as a human being? That's right - the moment he 'fails' in a human way: by (arguably irrationally) opting for mortality.
It shows in this thread as well. The moment the notion of AI autonomously creating art arises, the response is essentially "it can't be, it's not supposed to be!" The arguments are all over the place - it's not good enough, it's too good, it can't think, it can't feel, etc. In short: it's not human enough. It's insufficiently like me, and if it's not like me, I won't accept it. In the end, that's the criterion, the way I see it. And I think there's nothing wrong with that. If anything, AI is going to help us understand a little better what "being human" involves. Maybe that's the greatest 'innovation' it'll bring.
I think you are saying what I've been trying to say but you're a better writer than I am.
AI is a powerful, helpful tool, but it can't create 'new'; it can rehash and remix 'old', but it can't come up with something creative unless a human is behind it.
It's possible we agree on many of the main points. At the same time, perhaps not on all of them; e.g. the claim that it "can't come up with something creative unless a human is behind it":
I'm not so sure about this, particularly in light of what @Alan Edward Klein said before about human creativity and innovation. If you analyze instances of creativity and innovation, they are not necessarily always (and actually, most of the time they really aren't) the magic sparks we may like to think they are. I sometimes contend that virtually all innovation is recombinant. Likewise, creativity is probably in almost all cases not as random and not as big a step as we perceive it to be; it's to a large extent small steps in which existing notions are combined. These look like major, discontinuous leaps if you don't account for those pre-existing notions or inputs. But it's not exactly fair to treat humans as a black box in this regard, while in the case of AI we want to open that black box and ask which inputs it combines to get to a certain solution. The tricky bit, I think, is that AI (LLMs) as presently modeled mimics cognitive processes as we understand them in humans, and those include very specifically the processes underlying creative problem solving.
I think AI can be as creative as any person, if you look at the usefulness (a proxy for innovativeness) and discontinuity (a measure of radicalness) of the output.
Where there might be very real differences is in which inputs/stimuli affect the process and how these stimuli are weighed. For instance, in humans, and particularly when speaking of art, I would expect all manner of emotional and sensory stimuli to play a crucial role. These are inherently not present in an LLM, and to get similar behavior we would have to mimic them. It would result in a situation where we try to make a machine mimic the feeling of a lived human experience.
A philosophically interesting situation, due to this question that looms over everything associated with AI as we presently know it: is exhibiting certain behavior sufficient to draw conclusions about the nature of an entity, or is there something innate that needs to be present that gives rise to this behavior? In other words, if we make a machine that exhibits behavior that we associate with human emotional behavior, is that sufficient to say that the machine is being emotional? Or, put very simply: if it walks like a duck, quacks like a duck, can we safely conclude that it must be a duck? Personally, I don't think so. But we're presently facing a generation of ducks that start to quite easily pass the Turing test, and that brings a secondary question: even if the answer is strictly speaking "no", does it still matter practically speaking? I.e. even if we know it's not a duck, if it does the walk and the quack, and we can reasonably expect it to taste OK if we shoot & cook it, does it still matter how much true 'duckness' it contains? Tricky stuff, that.
We're only a small step away from this...
[image attachment]
AI would never write a song like Creep
Can you see AI unprompted creating something like a unique Star Wars universe?
As I argued before, I think we need to distinguish between our own imagination and what tools can generate (with or without some help). The answer to the 'can you see...' question is limited primarily by my own imagination.
Are you sure?
It just had to listen to the Hollies - The Air that I Breathe
What do you mean "unprompted"?
Since we were babies, we have always received prompts.
I pointed out the song specifically due to the guitar break.
You mean the distortion before the refrain?
Programs no matter how sophisticated can only do what's already been done.
Not strictly speaking. It's evident that LLMs as they are now produce things that are different from what they've been trained on. They produce on the basis of their training material; they don't strictly reproduce it. As to your examples - note that works like LOTR and Star Wars, formidably successful as they have proven to be, are strictly speaking recombinant and incremental innovations. Both can e.g. be placed in storytelling traditions that precede them by decades or even centuries (and longer). So yes, I'm convinced that AI as such is, or in due course will be, capable of producing the same type of recombinant, incremental innovation.
Mind you, recombinant and incremental innovations can be phenomenally successful, and that success can result in dramatic changes to social and economic structures. We have plenty of examples to choose from. But if they are studied closely, they virtually never seem to be technically discontinuous - even though they're touted to be, even in academic circles. They are discontinuous only if you look at them from sufficient distance and without accounting for the many precedents there have been.
No program is going to willfully do that.
I think "willful" is a problematic term in this context. If you mean to say that "no program is capable of it" - well, I regard that as an axiomatic statement. We'll see, but I think the odds are against your statement.
An AI has no drive, no reason of being.
We drive it. It exists because we built and continue to build it.
Looking at your argument from the other side, the human side: humans strictly speaking also have no reason of being. For all we know, we popped up as a happy little accident in a massively complex cosmic dance. And we'll disappear in that same dance, coincidentally. As to drive - it's an evolutionary tautology. A species without the will to live gets eaten, unless some other species keeps it propped up for its own interest. We can understand AI (and all other man-made technology, as well as most domesticated animals) in similar terms.
Since AI seems to be programmed to learn from what it is fed or finds on the internet, I find it difficult to believe it can really be creative. After all, much creative work is reactionary rather than evolutionary. In art and photography, many movements came about as artists wanted to distance themselves from the status quo - Impressionism, f64, New Topographics - all breaking from tradition at the time. Can AI do that? Even prompted to, where would it turn for source material or even inspiration (if that is even possible)?
I think part of it is the lack of our own imagination, which also resounds in your response; you "find it difficult to believe" - well, there's a lot of stuff that happens in the cosmos that's difficult to believe. It doesn't stop DNA from replicating - one of those things that's so fragile and at the same time so robust that many people downright refuse to believe it wasn't purposefully designed by some entity (now that I personally find difficult to believe).
The only principle that does not inhibit progress is: anything goes.
At the same time, I suspect that whenever someone says "anything goes" or something along those lines, it's too easily misunderstood as being intended as a guiding principle. Let me be very clear - in my case, it is not.
This is not a ‘principle’ I hold. It is the terrified exclamation of a rationalist who takes a closer look at history.
Don't confuse difficult to believe with difficult to understand.
This is very wrong. The new generation is recursively fed, so they learn from their errors.
Learning from errors does not necessarily mean it can make leaps in thinking. Can AI have a "eureka moment"? https://connectsci.au/news/news-par...s-can-be-predicted-before-they?searchresult=1
Getting a bit off topic, but one of these sure could come in handy to lug gear for you
Walks like an old man.
Falls over like one, too.
Don't confuse difficult to believe with difficult to understand.
That's a valid point, but the latter can easily result in the former, and in the context of AI-related threads, it seems to happen a heck of a lot.
Your view amounts to 'wait and see', while I retain a more empirically skeptical view of 'if it happens'.
I guess when the Roombas come for us I'll be rounded up with the other Humanists as I'll reject our robot overlords.