The only reality of oil on canvas, etc. is the materials. The rest is make believe. So is photography, if you think about it.

The physical integrity of film made by a human being is shared by the physical integrity of oil on canvas made by human beings, or pencil on paper, or watercolour, or acrylic. Need I go on? AI is make believe.
Sure, but it's still not real.

No, it is not, because the make believe is from the human mind, not AI.
Will AI make real photography websites such as this one irrelevant?
The internet is flooded with manipulated images that go well beyond anything the average film or digital photographer produces.
Will we have cameras that have an AI mode?
The future does not look good.
Will anyone ever accept an AI-generated 'photo' which depicts a real activity (but does not actually record a real event as seen by any eyewitness) as representing the reality which occurred? IOW, will anyone ever think such a depiction is 'real'? If not (and I think no one ever would accept such a photo), AI will never fully replace the camera recording digitally or on film. An AI-controlled robot might take the place of a living, breathing photojournalist, but an AI-generated image will not always substitute adequately for the recording of an actual event.
Would you accept this as representing the parade recently held on the streets of Washington DC, in a news article in your newspaper or Time magazine?
This looks like troops from a foreign country are parading in front of the White House, with a non-commissioned officer wearing the hat of a commissioned officer!
You could spend time breaking down the photo, but one thing I've noticed with AI images is that they LOVE razor-thin depth of field. The AI looks at all the popular professional photos and sees that subject separation is very prevalent. So it assumes that all good photos need subject separation, even when it makes no sense photographically.
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.one thing I've noticed with AI images is it LOVES razor thin depth of field
As it stands with art of all kinds it's a passing fad. It's an assist tool that is shiny and new now and used to crank out slop. Sooner or later the next thing will come along.

Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI.

Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains a following and evolves, and ultimately gains wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that. It doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing.

Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over, and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.
I sure miss the "a picture is worth a thousand words" days

That's an interesting remark, as there's a lot going on if you start to untangle it, I think.
Why would that make any difference?
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.
Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains following and evolves, and ultimately gains a wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting etc.? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that, even. Doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.
That's an interesting remark as there's a lot going on if you start to untangle it, I think.
First thing that comes to mind is that it takes only a limited number of words to generate most of the AI imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with additional insight into the relation between text and imagery in terms of information richness.

Now, if you were to generate an AI image with a prompt of, let's say, 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately: something in between those concepts). That borrowing/making-up is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with that prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought - what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLMs are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crowbar to open a very small box. One of the inevitable lines of progress is that (1) we'll build even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handling vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image - will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuance that is difficult for us humans to comprehend, let alone make.
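The word-count asymmetry in the argument above can be made concrete with a toy sketch. Both strings here are invented purely for illustration (neither is a real prompt or a real model's output):

```python
# Toy illustration of the prompt/description asymmetry: a short prompt
# goes in, and fully describing the resulting image back in words takes
# far more text. Both strings are made up for this example.
prompt = (
    "a weathered fisherman mending nets at dawn, golden light, "
    "shallow depth of field, 35mm film look"
)

# A plausible (invented) fragment of a full verbal description of
# whatever image such a prompt might yield:
description = (
    "An elderly man with deeply lined hands sits on an upturned crate "
    "at the edge of a wooden pier. Low sunlight rakes across the coiled "
    "green netting in his lap; behind him, out-of-focus masts and gulls "
    "dissolve into warm blur. He wears a frayed wool cap, and the grain "
    "of the planks beneath him catches the same amber light..."
)

# The description already needs several times as many words as the
# prompt, and it is nowhere near complete - the "extra" information had
# to come from somewhere other than the prompt itself.
print(len(prompt.split()), "words in")
print(len(description.split()), "words of description so far, and counting")
```

This is only a crude proxy (word counts are not information content in any rigorous sense), but it makes the direction of the asymmetry easy to see.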
The main problem is that, when thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e. how society will evolve under the influence of AI (and vice versa) is simply too complex to predict, and as a result, how it'll unfold will be by definition unimaginable - it will catch all of us by surprise, and:
(2) change is pretty damn scary for most people (I'd say all, eventually), so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to the point that just thinking about all this gets someone so riled up that they storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative of the fear instilled by radical change - and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with - guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now we've made a tool that basically does the same thing, and then people all throw a hissy-fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.
How did it do?
That equating of the "popular" with "good" predates AI.
AI speeds up the recognition/determination of what's popular, which is of dubious value.
AI speeds up many things that do gain benefit from the speed.
Like many other important changes, the trick will be in learning where and how to use the capabilities.
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.
Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains following and evolves, and ultimately gains a wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting etc.? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that, even. Doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.
That's an interesting remark as there's a lot going on if you start to untangle it, I think.
First thing that comes to mind is that it takes only a limited number of words to generate most of the Ai imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with an additional insight into the relation between text and imagery in terms of information richness. Now, if you were to generate an AI image with a prompt of let's say 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately: something in-between those concepts). That 'borrowing/make-up' is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with this prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought - what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLM's are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crow bar to open a very small box. One of the inevitable lines of progress will be that we'll build (1) even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handing vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image - will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuancedness that is difficult for us humans to comprehend, let alone make.
The main problem here remains that thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e. how society will evolve under influence of AI (and vice versa) is simply too complex to predict, and as a result, how it'll unfold will be by definition unimaginable - it will catch all of us by surprised, and:
(2) change is pretty damn scary for most people (I'd say all, eventually) so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to a point that someone gets so riled up by just thinking about all this is enough to make them storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative for the fear instilled by radical change - and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with - guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now, we've made a tool that basically does the same thing adn then people all throw a hissy-fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.
Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains following and evolves, and ultimately gains a wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting etc.? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that, even. Doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.
That's an interesting remark as there's a lot going on if you start to untangle it, I think.
First thing that comes to mind is that it takes only a limited number of words to generate most of the Ai imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with an additional insight into the relation between text and imagery in terms of information richness. Now, if you were to generate an AI image with a prompt of let's say 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately: something in-between those concepts). That 'borrowing/make-up' is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with this prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought - what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLM's are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crow bar to open a very small box. One of the inevitable lines of progress will be that we'll build (1) even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handing vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image - will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuancedness that is difficult for us humans to comprehend, let alone make.
The main problem here remains that thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e. how society will evolve under influence of AI (and vice versa) is simply too complex to predict, and as a result, how it'll unfold will be by definition unimaginable - it will catch all of us by surprised, and:
(2) change is pretty damn scary for most people (I'd say all, eventually) so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to a point that someone gets so riled up by just thinking about all this is enough to make them storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative for the fear instilled by radical change - and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with - guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now, we've made a tool that basically does the same thing adn then people all throw a hissy-fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.
Also Star Wars isn't for nerds. Star Trek is.
"Nerd or do not Nerd. There is no try."
Nerds inhabit both imagined universes, in my experience.
I sure miss the "a picture is worth a thousand words" days

If you were to describe a picture with a thousand words to AI, you'd probably get a pretty good one. But that's a very long time ago. AI for generating fear and confusion, that's some scary sh*t!
Star Wars is for comic book fans.

AI doesn't have taste. Then again, most people are pretty tacky when it comes down to it. Me being one of them, when I'm honest with myself.
I'd say it's closer to CGI in movies. People were floored by it when it started being used and then quickly tired of it when it was overused. Eventually it got so good that even when something was obviously identifiable as CGI, it was ignored. People are still mad at George Lucas for those early-2000s editions of Star Wars where he CGI'd over the original special effects.
I skimmed through the summary and it's indeed that - it touches upon a few of the key points I made, but in doing so didn't achieve completeness or capture all of the essence.