
Can AI be used to produce art?


koraks

Moderator
Joined
Nov 29, 2018
Messages
26,588
Location
Europe
Format
Multi Format
Also, where does education go? "Do good in school, go to college, work hard, get a good job, build a life", what happens when that evaporates as well?
That's a good question - one that IMO relates to the question I've asked before (and I can't answer it) about how we relate to labor as humans. Mind you, there's not going to be an end to human labor for a long time, and likely not ever. There will be changes to it, as there always have been, and these changes will be rapid, as they have sometimes been throughout history. We don't know at this point what kind of changes those will be, and consequently what this means for the educational system. I have some hope that certain basic skills will remain essential and universally required. Think of skills related to reasoning, filtering information, assessing the validity of information, a basic awareness of epistemology - and of course certain social skills related to communication, relationship building, group work, etc.
 

Cholentpot

Member
Joined
Oct 26, 2015
Messages
7,005
Format
35mm
I think the whole talk about AI not being capable enough is bogus, really; I'm with @Sean on this. For the most part, it's already there, and insofar as it isn't, it's only a matter of a very short time until it gets there. At present, the main step that still needs to be made seems to be the one from "too perfect" towards "sufficiently wabi-sabi to trick us into believing it's real". That puts us in a position where photo-realistic AI is literally just that: indistinguishable from the real thing.

Where would that leave us?

Well, let's first get one important matter out of the way - when it comes to art, there's a heck of a lot more than photography. The thread asks about AI being used to produce art in a broad sense, so that inherently includes everything besides photography as well. This spans forms of art that do not need to mimic anything that already exists. There are entirely new avenues open for exploration; what's limiting us mostly at this point are the limits of our own imagination and ingenuity. I follow AI 'art' generation with a wary eye, and for the most part it really is uncreative slop - renders of barely clothed women in violent "Lethal Weapon, Kill Bill" kinds of situations (what the heck is wrong with people). Or, when it's a little more 'out there', it remains stuck at rehashes of Star Wars and Dune-type imagery of fantasy landscapes or space stations. Mind you, this is all being done with exceptional technical prowess, and the images as such are pretty mindboggling/stunning. Particularly creative or artistically compelling they are not, and that's not due to the use of AI. It's due to the fact that most of this 'successful' imagery is made basically by nerds with an interest in IT, and not by artists. Wait until capable hands (minds) start wielding this technology. We're in for a ride, for sure.

The more important issue I'd like to bring to the fore is the, in my view, rather hilarious argument that AI is 'not good enough' and should therefore be disqualified from the domain of the arts. That's not the point at all, in my view. That AI can be used to make art is not even up for debate, in my opinion. See above: the fact that, for now, it's mostly relatively uncreative minds using it, and that this results in fairly unartistic slop (including the 'crazy realism' video above), does not mean the potential isn't there. It's like arguing that cars are useless because once in a while someone drives into a tree with one. It's a logical fallacy to discredit the technology on the basis of a non-exhaustive set of counter-examples. Black swans don't exist until the day you see one, and all that. There's no doubt in my mind we'll be seeing plenty of black swans soon enough.

The real question is whether or not we'll accept that art as actual art. And that, I think, depends on the degree to which we recognize a human hand in it. In the end, I think the question of what is or isn't art boils down to what @thinkbrown said very early on in the thread; specifically this bit (I do not necessarily agree with what followed):

This limits the scope of what art is to work produced with involvement and arguably control by the human mind. I think in the end, that's really going to be the dividing line. And that automatically means that imagery (or sound, or whatever) that's made essentially by a digital agent with no direct control of a person over the outcome will never be accepted as art by a broader audience - with the 'audience' being defined as people who have an interest in art to begin with. That's an important caveat, because the majority of mankind in my opinion has no real interest in art. People like pretty things, overall, but that's a different matter.

The underlying question is why we would draw a dividing line between 'art' and 'not art' on the basis of human control. In my mind, that's simple - we want to be able to relate to it, that's all. We can't relate to a datacenter any more (and in fact, much less so, on average) than to a pet cat. It's just too different a beast. When it comes to AI-generated work, one of the key problems is that a datacenter is just too...good. It yields a perfect result, even if we require it to be imperfect - then it'll be perfect in its imperfection. For whatever underlying psychological or evolutionary reason, we can't really cope with that as humans. Probably it's just too threatening in the end, and we can't deal with it in the way we usually do.

Think about it - why do people read tabloids? For that matter, why do people read biographies of people like Einstein or Musk? A large part of it is to be able to spot all the imperfections, vices, character faults, misbehaviors and flaws exhibited by the rich and famous. These are the people we all somehow measure up against, the benchmark of success in one way or another - and one of the main ways of dealing with them, and with the notion that most of us will never really succeed in life the way these happy few have (that's why they're few, after all), is to focus not just on their success, but also on their failure. Einstein married his cousin - LOLWUT! And didn't that video of Musk smoking a joint during a podcast recording go viral? Sex! Drugs! Human weakness!

It's a lame piece of Hollywood trash in many ways, but in others, Bicentennial Man is an interesting movie. At what point does Mr Robot get accepted (American-style: in a legal sense) as a human being? That's right - the moment he 'fails' in a human way: by (arguably irrationally) opting for mortality.

It shows in this thread as well. The moment the notion of AI autonomously creating art arises, the response is essentially "it can't be, it's not supposed to be!" Arguments are all over the place - it's not good enough, it's too good, it can't think, it can't feel, etc. In short - it's not human enough. It's insufficiently like me, and if it's not like me, I won't accept it. In the end, that's the criterion, the way I see it. And I think there's nothing wrong with that. If anything, AI is going to help us understand a little better what "being human" involves. Maybe that's the greatest 'innovation' it'll bring.

I think you are saying what I've been trying to say but you're a better writer than I am.

AI is a powerful, helpful tool, but it can't create 'new'; it can rehash and remix 'old', but it can't come up with something creative unless a human is behind it.
 

Sean

Admin
Joined
Aug 29, 2002
Messages
13,576
Location
New Zealand
Format
Multi Format
I think you are saying what I've been trying to say but you're a better writer than I am.

AI is a powerful, helpful tool, but it can't create 'new'; it can rehash and remix 'old', but it can't come up with something creative unless a human is behind it.

It is being said AI will create new science starting in the first half of this year. I'm not sure how new science compares to the creative arts, but if it creates new science, then it crosses the hard line of 'it can only re-hash'.
 

koraks

Moderator
Joined
Nov 29, 2018
Messages
26,588
Location
Europe
Format
Multi Format
It's possible we agree on many of the main points. At the same time, perhaps not on all; e.g.:
it can't create 'new'; it can rehash and remix 'old', but it can't come up with something creative
I'm not so sure about this, particularly in light of what @Alan Edward Klein said before about human creativity and innovation. If you analyze instances of creativity and innovation, they are not necessarily always (and actually, most of the time they really aren't) the magic sparks we may like to think they are. I sometimes contend that virtually all innovation is recombinant. Likewise, creativity is probably in almost all cases not as random and as big a step as we perceive it to be; it's to a large extent small steps in which existing notions are combined. These look like major, discontinuous leaps if you don't account for those pre-existing notions or inputs. But it's not exactly fair to ignore those inputs in the case of humans and treat them as a black box, while in the case of AI we want to open that black box and ask which inputs it combines to get to a certain solution. The tricky bit, I think, is that AI (LLMs) is presently modeled to mimic cognitive processes as we understand them in humans, and those very specifically include the processes underlying creative problem solving.

I think AI can be as creative as any person, if you look at the usefulness (a proxy for innovativeness) and discontinuousness (as a measure of radicalness) of the output.
Where there might be very real differences is in which inputs/stimuli affect the process and how these stimuli are weighed. For instance, in humans, and particularly when speaking of art, I would expect all manner of emotional and sensory stimuli to play a crucial role. These are innately absent in an LLM, and to get similar behavior, we would have to mimic them. It would result in a situation where we would try to make a machine mimic the feeling of a lived human experience.

A philosophically interesting situation, due to this question that looms over everything associated with AI as we presently know it: is exhibiting certain behavior sufficient to draw conclusions about the nature of an entity, or is there something innate that needs to be present that gives rise to this behavior? In other words, if we make a machine that exhibits behavior that we associate with human emotional behavior, is that sufficient to say that the machine is being emotional? Or, put very simply: if it walks like a duck, quacks like a duck, can we safely conclude that it must be a duck? Personally, I don't think so. But we're presently facing a generation of ducks that start to quite easily pass the Turing test, and that brings a secondary question: even if the answer is strictly speaking "no", does it still matter practically speaking? I.e. even if we know it's not a duck, if it does the walk and the quack, and we can reasonably expect it to taste OK if we shoot & cook it, does it still matter how much true 'duckness' it contains? Tricky stuff, that.
 

koraks

Moderator
Joined
Nov 29, 2018
Messages
26,588
Location
Europe
Format
Multi Format
We're only a small step away from this...
[attached image]
 

Cholentpot

Member
Joined
Oct 26, 2015
Messages
7,005
Format
35mm
It's possible we agree on many of the main points. [...] I think AI can be as creative as any person, if you look at the usefulness (a proxy for innovativeness) and discontinuousness (as a measure of radicalness) of the output. [...]

Can you see AI, unprompted, creating something like a unique Star Wars universe? Or even Middle-earth? While I know how these things come to be (they're built on ideas that come together over time or over generations), I can't really see AI creating a movement. Punk rock, grunge, etc. ... it can try, but some things are just cultural osmosis that I can't see a computer grasping despite all its inputs.

A computer doesn't experience the trenches of WWI and write an epic based on that. An AI doesn't go through high school, heartbreak, or the gain and loss that follow us through life. It doesn't have family and friends to color its existence. You wrote before that people like looking at pretty things. AI can make pretty things, but pretty things aren't always art. AI would never write a song like Creep or put out a single like You Know My Name (Look Up the Number). AI doesn't and won't take creative risks. You can tell it to, but I don't think you can ever program that into something. The drum machine doesn't make real mistakes. And if it does, it's a glitch; if the glitch is exploited, then that's human creativity using the tool.

We're only a small step away from this...
[attached image]

One of the great lines is when it stomps over the ants: 'Like walking over a bag of crisps'.
 

Alan Edward Klein

Member
Joined
Aug 29, 2017
Messages
10,135
Location
New Jersey formerly NYC
Format
Multi Format
I think the whole talk about AI not being capable enough is bogus, really; I'm with @Sean on this. [...] The real question is whether or not we'll accept that art as actual art. And that, I think, depends on the degree to which we recognize a human hand in it. [...]

How would a viewer know who or what created it? If the viewer thinks it's art, it's art.
 

Cholentpot

Member
Joined
Oct 26, 2015
Messages
7,005
Format
35mm
Are you sure? 😃
It just had to listen to the Hollies - The Air that I Breathe

I pointed out the song specifically due to the guitar break. No program is going to willfully do that.

As I argued before, I think we need to distinguish between our own imagination and what tools can generate (with or without some help). The answer to the 'can you see...' question is limited primarily by my own imagination.

Programs, no matter how sophisticated, can only do what's already been done; I still think that humans have a lock on creativity. If you put AI in a limited environment, it will be limited by that environment. It's not going to yearn for something more or need stimulation. Human nature is to seek stimulation. An AI has no drive, no reason for being.

What do you mean "unprompted"?
Since we were babies, we have always received prompts

AI won't ask for more bananas; a baby will if it wants more bananas.
 

koraks

Moderator
Joined
Nov 29, 2018
Messages
26,588
Location
Europe
Format
Multi Format
Programs, no matter how sophisticated, can only do what's already been done
Not strictly speaking. It's evident that LLMs as they are now are producing things that are different from what they've been trained on. They produce on a certain basis, but they don't strictly reproduce. As to your examples - note that works like LOTR and Star Wars, formidably successful as they have proven to be, are strictly speaking recombinant and incremental innovations. Both can, for example, be placed in storytelling traditions that precede them by decades or even centuries (and longer). So yes, I'm convinced that AI as such is, or in due course will be, capable of producing the same type of recombinant, incremental innovation.

Mind you, recombinant and incremental innovations can be phenomenally successful, and that success can result in dramatic changes to social and economic structures. We have plenty of examples to choose from. But if they are studied closely, they virtually never seem to be technically discontinuous - even though they're touted to be, even in academic circles. They are discontinuous only if you look at them from sufficient distance and without accounting for the many precedents there have been.

No program is going to willfully do that.
I think "willfull" is a problematic term in this context. If you mean to say that "no program is capable of it" - well, I regard that as an axiomatic statement. We'll see, but I think the odds are against your statement.

An AI has no drive, no reason for being.
We drive it. It exists because we built and continue to build it.
Looking at your argument from the other side, the human side: humans, strictly speaking, also have no reason for being. For all we know, we popped up as a happy little accident in a massively complex cosmic dance. And we'll disappear in that same dance, just as coincidentally. As to drive - it's an evolutionary tautology. A species without the will to live gets eaten, unless some other species keeps it propped up for its own interest. We can understand AI (and all other man-made technology, as well as most domesticated animals) in similar terms.
 

Cholentpot

Member
Joined
Oct 26, 2015
Messages
7,005
Format
35mm
Not strictly speaking. It's evident that LLMs as they are now are producing things that are different from what they've been trained on. [...] We drive it. It exists because we built and continue to build it. [...]

Your view amounts to 'wait and see', while I retain a more skeptical, empirical view of 'if it happens'.

I guess when the Roombas come for us I'll be rounded up with the other Humanists as I'll reject our robot overlords.
 

Pieter12

Member
Joined
Aug 20, 2017
Messages
8,132
Location
Magrathean's computer
Format
Super8
Since AI seems to be programmed to learn from what it is fed or finds on the internet, I find it difficult to believe it can really be creative. After all, much creative work is reactionary rather than evolutionary. In art and photography, many movements came about as artists wanted to distance themselves from the status quo: Impressionism, f/64, New Topographics, all breaking from tradition at the time. Can AI do that? Even prompted to, where would it turn for source material or even inspiration (if that is even possible)?
 

koraks

Moderator
Joined
Nov 29, 2018
Messages
26,588
Location
Europe
Format
Multi Format
Since AI seems to be programmed to learn from what it is fed or finds on the internet, I find it difficult to believe it can really be creative. [...]
I think part of it is the lack of our own imagination, which also resonates in your response; you "find it difficult to believe" - well, there's a lot of stuff that happens in the cosmos that's difficult to believe. That doesn't stop DNA from replicating - one of those things that's so fragile and at the same time so robust that many people just downright refuse to believe it wasn't purposefully designed by some entity (now that I personally find difficult to believe).

Another part is what I addressed above; what you point towards with 'breaking from tradition' is the discontinuous nature of the social (and economic) developments involved in the emergence of something seemingly new. But the actual developments need not be so discontinuous at all. If you look at the movements you mention, the seminal works and artists didn't just appear out of thin air. They are deeply rooted in traditions that can be recognized. But we like to think they were radical, or perhaps we dislike doing the legwork of figuring out where they took their inspiration from. I think it's a bit of both.

I will grant you this: the movements you mention were indeed reactionary in the sense that they had an agenda to push. So far, we don't see AI behaving as a truly sentient entity in the sense that it starts to define its own interest and then attempts to serve it. I'm not sure how likely that is to happen, and whether we'll allow it, if push comes to shove. Neither am I convinced that it's crucial in answering all questions about the relationship between AI and art. Personally, I view that argument (which I might haphazardly summarize as "AI can't push its own agenda and therefore it can't produce art") as one more variant of the "AI can't do X, so it's bunk" line of argumentation. We see plenty of variants of it, and I think they are much more about people pushing their own agendas than about trying to truly understand what is going on and what might happen next.

Now, one thing is blatantly obvious to me - there will be a reactionary arts movement that will use AI as part of its toolkit to try to push its agenda and make its mark upon society. Because... why the heck not? With 8 billion of us running around, every tool will be used for every imaginable purpose sooner or later, and given enough brute force and stubbornness, that succeeds in a surprisingly large number of instances.
 
OP

nikos79

Member
Joined
Mar 9, 2025
Messages
1,039
Location
Lausanne
Format
35mm
Since AI seems to be programmed to learn from what it is fed or finds on the internet, I find it difficult to believe it can really be creative. [...]

This is very wrong. The new generation of models is fed recursively, so they learn from their errors.
Think of it as if they produce a lot of answers (including wrong ones), which then become part of their knowledge.
This process has shown much more creativity than the original models.
And it is close to evolution - a kind of self-learning.
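
To make the kind of loop described above concrete, a minimal, purely illustrative sketch in Python might look like this (the Model/generate/score/finetune names are hypothetical stand-ins, not any specific vendor's API):

Code:
# Purely illustrative sketch of the "recursive feeding" idea described above.
# Model, generate, score and finetune are hypothetical stand-ins, not a real API.
def self_training_round(model, prompts, keep_threshold=0.8):
    """Generate candidate answers, keep the good ones, and fold them back into training."""
    kept = []
    for prompt in prompts:
        for answer in model.generate(prompt, n=8):              # many answers, wrong ones included
            if model.score(prompt, answer) >= keep_threshold:   # keep only answers passing some quality signal
                kept.append((prompt, answer))
    return model.finetune(kept)                                 # the kept answers become part of the model's "knowledge"

# Repeating this round after round is the recursive, evolution-like self-learning referred to above.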
 

koraks

Moderator
Joined
Nov 29, 2018
Messages
26,588
Location
Europe
Format
Multi Format
I'd like to make clear, as I've done before in other threads, that my position in this is very firmly not that of some kind of AI advocate. I try to understand what I see happening, and in doing so I rely on what I've learned so far on various things, including innovation, which has been a major element in my career. I try to keep this attempt at understanding as free of normative judgement as humanly possible (which is probably limited).

Feyerabend had it right:
The only principle that does not inhibit progress is: anything goes.
This is not a ‘principle’ I hold. It is the terrified exclamation of a rationalist who takes a closer look at history.
At the same time, I suspect that whenever someone says "anything goes" or something along those lines, it's too easily misunderstood as being intended as a guiding principle. Let me be very clear - in my case, it is not.
 

Pieter12

Member
Joined
Aug 20, 2017
Messages
8,132
Location
Magrathean's computer
Format
Super8
well, there's a lot of stuff that happens in the cosmos that's difficult to believe.
Don't confuse difficult to believe with difficult to understand.
This is very wrong. The new generation of models is fed recursively, so they learn from their errors.
Learning from errors does not necessarily mean it can make leaps in thinking. Can AI have a "eureka moment"? https://connectsci.au/news/news-par...s-can-be-predicted-before-they?searchresult=1
 

Alan Edward Klein

Member
Joined
Aug 29, 2017
Messages
10,135
Location
New Jersey formerly NYC
Format
Multi Format
Getting a bit off topic, but one of these sure could come in handy to lug gear for you 😁

[embedded media]

Walks like an old man.
 

Alan Edward Klein

Member
Joined
Aug 29, 2017
Messages
10,135
Location
New Jersey formerly NYC
Format
Multi Format
Your view amounts to 'wait and see', while I retain a more skeptical, empirical view of 'if it happens'.

I guess when the Roombas come for us I'll be rounded up with the other Humanists as I'll reject our robot overlords.

The Roomba company has gone bankrupt. AI might be next.
 