
Artificial intelligence.


Cholentpot
Member · Joined: Oct 26, 2015 · Messages: 7,050 · Format: 35mm
I've turned somewhat on my AI position in the past few months. As it stands with art of all kinds it's a passing fad. It's an assist tool that is shiny and new now and used to crank out slop. Sooner or later the next thing will come along. As a practical tool it's priceless. I've been using it to track my calorie intake and exercise, and it's honestly been a life changer. So when the generation above me says things like 'Bah! AI is trash', I point out that the AI they are sharing daily is trash. And if that's all the exposure you have to AI, then you'll believe it's all trash.

On a personal level, I think the abundance of AI slop has made me feel much better and more secure about my own output. My work is now infinitely more valuable and unique. Anyone can make something with AI, but I create with my own intellect and effort. That is a major self-worth boost. It doesn't matter what trash I put out; as long as it's human-made, it has more worth.

Human-made will be the next 'hand made' or 'hand crafted'.
 

Pieter12
Member · Joined: Aug 20, 2017 · Messages: 8,299 · Location: Magrathean's computer · Format: Super8
The physical integrity of film made by a human being is shared by the physical integrity of oil on canvas made by human beings, or pencil on paper, or watercolour, or acrylic. Need I go on? AI is make believe.
The only reality of oil on canvas, etc. is the materials. The rest is make-believe. So is photography, if you think about it.
 

MattKing
Moderator · Joined: Apr 24, 2005 · Messages: 55,508 · Location: Delta, BC Canada · Format: Medium Format
It can be difficult to separate concern about the general consequences of the rapid spread of AI in society from concern about how AI affects, or might affect, photography in particular.
 

cliveh
Subscriber · Joined: Oct 9, 2010 · Messages: 7,831 · Format: 35mm RF
The only reality of oil on canvas, etc. is the materials. The rest is make-believe. So is photography, if you think about it.

No, it is not, because the make-believe comes from the human mind, not from AI.
 

wiltw
Subscriber · Joined: Oct 4, 2008 · Messages: 6,739 · Location: SF Bay area · Format: Multi Format
Will AI make real photography websites such as this one irrelevant?
The internet is flooded with manipulated images that go far beyond what the average film or digital photographer produces.
Will we have cameras that have an AI mode?
The future does not look good.

Will anyone ever accept an AI-generated 'photo' which depicts a real activity (but does not actually record a real event as seen by any eyewitness) as representing the reality that occurred? In other words, will anyone ever think such a depiction is 'real'? If not (and I think no one ever would accept such a photo), AI will never fully replace the camera recording digitally or on film. An AI-controlled robot might take the place of a living, breathing photojournalist, but an AI-generated image will not always substitute adequately for the recording of an actual event.

Would you accept this as representing the parade recently held on the streets of Washington DC, in a news article in your newspaper or Time magazine?
[Attached image: 078ae422-cfa2-4696-8527-efd72b469d9e.jpg]

It looks as though troops from a foreign country are parading in front of the White House, with a non-commissioned officer wearing the hat of a commissioned officer!
 

Pieter12
Member · Joined: Aug 20, 2017 · Messages: 8,299 · Location: Magrathean's computer · Format: Super8
Clients have already generated new images from existing photo shoots when they don't get what they want or don't want to pay for more. These are pretty much indistinguishable from the real thing.
 

Cholentpot
Member · Joined: Oct 26, 2015 · Messages: 7,050 · Format: 35mm
wiltw said:
Will anyone ever accept an AI-generated 'photo' which depicts a real activity (but does not actually record a real event as seen by any eyewitness) as representing the reality that occurred? [...]

You could spend time breaking down the photo but one thing I've noticed with AI images is it LOVES razor thin depth of field. It looks at all the popular professional photos and sees that subject separation is very prevalent. So the AI assumes that all good photos need subject separation even when it makes no sense photographically.
 

MattKing
Moderator · Joined: Apr 24, 2005 · Messages: 55,508 · Location: Delta, BC Canada · Format: Medium Format
You could spend time breaking down the photo but one thing I've noticed with AI images is it LOVES razor thin depth of field. It looks at all the popular professional photos and sees that subject separation is very prevalent. So the AI assumes that all good photos need subject separation even when it makes no sense photographically.

That equating of the "popular" with the "good" predates AI.
AI speeds up the recognition and determination of what is popular, which is of dubious value.
AI also speeds up many things that genuinely benefit from the speed.
As with many other important changes, the trick will be in learning where and how to use the capabilities.
 

mshchem
Subscriber · Joined: Nov 26, 2007 · Messages: 16,290 · Location: Iowa City, Iowa USA · Format: Medium Format
I sure miss the "a picture is worth a thousand words" days. But that was a very long time ago. AI for generating fear and confusion, that's some scary sh*t!
 

koraks
Moderator · Joined: Nov 29, 2018 · Messages: 27,669 · Location: Europe · Format: Multi Format
one thing I've noticed with AI images is it LOVES razor thin depth of field
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.

As it stands with art of all kinds it's a passing fad. It's an assist tool that is shiny and new now and used to crank out slop. Sooner or later the next thing will come along.
Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains a following and evolves, and ultimately gains wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for five years or so, if that. It doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface of this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and that there was no point investing in a website; it would wear off. Then they were all proved "right" by the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.

I sure miss the "a picture is worth a thousand words" days
That's an interesting remark, as there's a lot going on if you start to untangle it, I think.
The first thing that comes to mind is that it takes only a limited number of words to generate most of the AI imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with an additional insight into the relation between text and imagery in terms of information richness. Now, if you were to generate an AI image with a prompt of, let's say, 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately, something in between those concepts). That borrowing/making-up is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with that prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought: what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLMs are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crowbar to open a very small box. One of the inevitable lines of progress is that (1) we'll build even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handling vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image? Will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuance that is difficult for us humans to comprehend, let alone make.

The main problem is that, when thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e., how society will evolve under the influence of AI (and vice versa) is simply too complex to predict, and as a result, how it unfolds will be by definition unimaginable; it will catch all of us by surprise. And:
(2) change is pretty damn scary for most people (I'd say all, eventually), so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to the point that just thinking about all this gets someone so riled up that they storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative of the fear instilled by radical change, and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with: guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now we've made a tool that basically does the same thing, and then people all throw a hissy fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.
 

Alan Edward Klein
Member · Joined: Aug 29, 2017 · Messages: 10,207 · Location: New Jersey, formerly NYC · Format: Multi Format
Why would that make any difference?

How do you bracket exposure with a flash that needs time to recharge? Or do the new flashes not need time?
 

Alan Edward Klein
Member · Joined: Aug 29, 2017 · Messages: 10,207 · Location: New Jersey, formerly NYC · Format: Multi Format
I've already posted some Gemini AI answers to technical questions about photography here on this forum. It's very handy, and I find it doesn't contradict photography; it's just a tool. Regarding AI creation of photographs, I'm opposed, although using it to edit real photos within limits is OK.
 

Alan Edward Klein
Member · Joined: Aug 29, 2017 · Messages: 10,207 · Location: New Jersey, formerly NYC · Format: Multi Format
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.


Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains following and evolves, and ultimately gains a wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting etc.? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that, even. Doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.


That's an interesting remark as there's a lot going on if you start to untangle it, I think.
First thing that comes to mind is that it takes only a limited number of words to generate most of the Ai imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with an additional insight into the relation between text and imagery in terms of information richness. Now, if you were to generate an AI image with a prompt of let's say 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately: something in-between those concepts). That 'borrowing/make-up' is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with this prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought - what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLM's are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crow bar to open a very small box. One of the inevitable lines of progress will be that we'll build (1) even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handing vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image - will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuancedness that is difficult for us humans to comprehend, let alone make.

The main problem here remains that thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e. how society will evolve under influence of AI (and vice versa) is simply too complex to predict, and as a result, how it'll unfold will be by definition unimaginable - it will catch all of us by surprised, and:
(2) change is pretty damn scary for most people (I'd say all, eventually) so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to a point that someone gets so riled up by just thinking about all this is enough to make them storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative for the fear instilled by radical change - and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with - guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now, we've made a tool that basically does the same thing adn then people all throw a hissy-fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.

Your post was too long to read, so I asked Gemini AI to summarize it for me. Here's what it said. Does that match what your post said? How did it do?

Gemini AI:

A forum discussion explores the evolving relationship between text, imagery, and AI, noting that while current AI imagery is generated from short prompts, resulting "information richness" is borrowed from existing data. The author speculates that future models handling larger inputs will produce imagery of a complexity beyond human comprehension. The text argues that criticizing AI for being "unimaginative" is ironic, as human evolution has prioritized replication over pure innovation, and attributes resistance to AI to a "panic response" from fear of uncontrollable, non-linear change. Another user views AI as a useful tool for technical queries and limited photo editing but opposes purely AI-generated "photographs." More details are available in the forum discussion.
 

nikos79
Member · Joined: Mar 9, 2025 · Messages: 1,108 · Location: Lausanne · Format: 35mm
Nearly everything AI now facilitates in Lightroom or other photo editing tools has its roots in the darkroom, where experienced and skilled printers shaped images to their will through exceptional craft. It was just more time-consuming, so not many bothered to make these changes in the past. Seen that way, today's tools feel less like a rupture and more like a continuation. So I don't find it that alarming or dramatic.
 

Cholentpot
Member · Joined: Oct 26, 2015 · Messages: 7,050 · Format: 35mm
That equating of the "popular" with the "good" predates AI.
AI speeds up the recognition and determination of what is popular, which is of dubious value.
AI also speeds up many things that genuinely benefit from the speed.
As with many other important changes, the trick will be in learning where and how to use the capabilities.

AI doesn't have taste. Then again, most people are pretty tacky when it comes down to it. Me being one of them when I'm honest with myself.

"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.


Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains following and evolves, and ultimately gains a wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting etc.? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that, even. Doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.


That's an interesting remark as there's a lot going on if you start to untangle it, I think.
First thing that comes to mind is that it takes only a limited number of words to generate most of the Ai imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with an additional insight into the relation between text and imagery in terms of information richness. Now, if you were to generate an AI image with a prompt of let's say 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately: something in-between those concepts). That 'borrowing/make-up' is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with this prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought - what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLM's are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crow bar to open a very small box. One of the inevitable lines of progress will be that we'll build (1) even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handing vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image - will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuancedness that is difficult for us humans to comprehend, let alone make.

The main problem here remains that thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e. how society will evolve under influence of AI (and vice versa) is simply too complex to predict, and as a result, how it'll unfold will be by definition unimaginable - it will catch all of us by surprised, and:
(2) change is pretty damn scary for most people (I'd say all, eventually) so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to a point that someone gets so riled up by just thinking about all this is enough to make them storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative for the fear instilled by radical change - and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with - guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now, we've made a tool that basically does the same thing adn then people all throw a hissy-fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.

I'd say it's closer to CGI in movies. People were floored by it when it started being used, and then quickly tired of it when it was overused. Eventually it got so good that even when it was obviously CGI, it was ignored. People are still mad at George Lucas for that early-2000s edition of Star Wars where he CGIed over the original special effects.

Also Star Wars isn't for nerds. Star Trek is.
 

mshchem
Subscriber · Joined: Nov 26, 2007 · Messages: 16,290 · Location: Iowa City, Iowa USA · Format: Medium Format
"It" doesn't like anything. This is just a generation mode that's used a lot. These models aren't limited to shallow DoF images. There's plenty of "f/64" work being made as well. What we're mostly seeing right now is stylistic poverty. If you give a thousand random people oil paints, what do you reckon you'll get? That's right - a sticky mess. Not a whole lot that's worth looking at. With AI, the issue is that since it's a gadget (essentially), the people who warm up to it presently at a large scale are people who like gadgets. And while some people with a fondness for gadgets have artistic talent, the vast majority are just like any other people - of moderate talent, at best. So what they generate is repetitive and unimaginative. I've said it before - I've been following AI 'art' in one place for a while and the picture styles boil down to (1) anime, (2) star wars-kind of sci fi scenes, (3) imagery of violent women. It's no surprise, because these people are essentially geeks (I don't mean that in a derogatory way, but to describe their general interest in technology as well as certain genres of media) and they end up creating what they consume. In that sense, these people aren't too different from AI, and they suffer from the same limitation: much like AI, most people find it very hard to imagine anything that deviates strongly from what they already know.


Nah, I don't think so, really. I mean, perhaps we will at some point raise the bar a little for calling something 'art'. That would be nice. But I'm convinced that insightful, creative, socially-engaged art will be created sooner or later with AI. Think about it; it's just like photography. Sooner or later someone comes along and does something with the new medium that blows everybody's mind. Then of course most people get angry and reject it, because it's not what they're used to. But bit by bit, the new 'fad' gains following and evolves, and ultimately gains a wider acceptance. How long did it take for photography to be accepted as an art form at a similar level as painting etc.? Arguably, it's not even there yet, and it's been 150+ years in the making. AI-generated imagery has been around in substantial quantities for 5 years or so, if that, even. Doesn't make a whole lot of sense to declare it dead; we're not even scratching the surface on this thing. Remember (if you're old enough) how some businesses in the mid-1990s said that the whole online thing would blow over and why would they invest in a website; it would wear off. Then they were all proved "right" with the dotcom bubble in 2000. Yeah, we all know how that went down, ultimately.


That's an interesting remark as there's a lot going on if you start to untangle it, I think.
First thing that comes to mind is that it takes only a limited number of words to generate most of the Ai imagery we're seeing currently; let's say a couple of dozen at the most. That's a far cry from "a thousand", but this direct relationship between words (as input) and an image (as output) does provide us with an additional insight into the relation between text and imagery in terms of information richness. Now, if you were to generate an AI image with a prompt of let's say 20 words, it will likely take a whole lot more words to describe the end result. Approaching this very simplistically, you can ask where the additional information content came from. The evident answer is that AI "made it up", or "borrowed" it from other sources (or, more accurately: something in-between those concepts). That 'borrowing/make-up' is not necessarily directed; it'll be in line with the actual prompt that was given and correlate (in terms of existing/known data) with this prompt. And that makes it inherently "unimaginative". Then again, if you look at lots of (bona fide) art, there's also stylistic congruence within bodies of work that could (if you're critical) be regarded as some form of replication.
Extending this line of thought - what happens if we dramatically increase the number of words a prompt is constructed from? Present-day LLM's are relatively crude and limited due to the nature of the computations involved and the way we're presently implementing them. It's a very, very big crow bar to open a very small box. One of the inevitable lines of progress will be that we'll build (1) even bigger crowbars at lower cost and (2) we will undoubtedly at some point figure out other tools besides crowbars to do the same job. Either way, these generative models will become capable of handing vastly larger inputs, and at that point it will become possible to turn, say, a short story or a novel into an image. What happens if you work 10k words into a single image - will it actually become 'worth' 10k words? Probably not always, but there's a decent possibility we're going to see imagery arise of a complexity and nuancedness that is difficult for us humans to comprehend, let alone make.

The main problem here remains that thinking about AI, many of us remain stuck in two modes that also interact:
(1) our imagination is limited and we're trying to linearly predict the progress of something that will develop non-linearly. I.e. how society will evolve under influence of AI (and vice versa) is simply too complex to predict, and as a result, how it'll unfold will be by definition unimaginable - it will catch all of us by surprised, and:
(2) change is pretty damn scary for most people (I'd say all, eventually) so our natural response to change that's also uncontrollable is rejection.
Especially #2 troubles the waters (hehe, see what I did there...), sometimes to a point that someone gets so riled up by just thinking about all this is enough to make them storm out in a flurry while banging the door behind them (see one page above). It's a panic response, which I think is illustrative for the fear instilled by radical change - and I think that's a very human response (although not necessarily a very productive one).
Also, #1 I always find rather ironic, because one of the problems people often have with AI is that it's unimaginative. But then when I look at the vast majority of work, solutions, inventions etc. that the vast number of humans come up with - guess what, they're also not super imaginative. So what? We've evolved to come up with solutions that work well enough to survive to the next generation. It's only natural that we do, like, 99.9% tried-and-tested stuff and only 0.1% new. We were not trained to be creative; we were trained to replicate. Now, we've made a tool that basically does the same thing adn then people all throw a hissy-fit that it's 'unimaginative'. Honestly, that never fails to make me chuckle.

Too many words. I'm worried about the masses being led to war and riot by humans creating false images and narratives. Like the bloated tub of crap, not sure who, in the open carriage. 🤔
 

MattKing
Moderator · Joined: Apr 24, 2005 · Messages: 55,508 · Location: Delta, BC Canada · Format: Medium Format
Also Star Wars isn't for nerds. Star Trek is.

"Nerd or do not Nerd. There is no try." :whistling:
Nerds inhabit both imagined universes, in my experience.
 

Cholentpot
Member · Joined: Oct 26, 2015 · Messages: 7,050 · Format: 35mm
"Nerd or do not Nerd. There is no try." :whistling:
Nerds inhabit both imagined universes, in my experience.

Also, pointing fingers when we're on a film-shooting forum? Talk about uber-ultra nerds. Every single person on this website is a certified geekazoid nerdroid. Pointing fingers at pop-culture fans? We should all look through our crusty 50-year-old viewfinders into the mirror and take a long look.
 

Pieter12
Member · Joined: Aug 20, 2017 · Messages: 8,299 · Location: Magrathean's computer · Format: Super8
AI doesn't have taste. Then again, most people are pretty tacky when it comes down to it. Me being one of them when I'm honest with myself.



I'd say it's closer to CGI in movies. People were floored by it when it started being used, and then quickly tired of it when it was overused. Eventually it got so good that even when it was obviously CGI, it was ignored. People are still mad at George Lucas for that early-2000s edition of Star Wars where he CGIed over the original special effects.

Also Star Wars isn't for nerds. Star Trek is.
Star Wars is for comic book fans.
 

koraks
Moderator · Joined: Nov 29, 2018 · Messages: 27,669 · Location: Europe · Format: Multi Format
Also Star Wars isn't for nerds. Star Trek is.

Hehe!
IDK about the CGI parallel. While CGI is evidently pervasive in the graphic arts these days, including cinema, the whole scope of AI is much broader. But I take it you refer to the image-generating subset of AI, in which case it may be something like that. Then again, I suspect there will still be quite fundamental differences. E.g., with CGI you could argue that it is now used to produce feature-length movies (the Toy Story genesis thing), and that changed the layout of the industry somewhat. However, I expect that AI will eventually bring production costs so far down that such productions will come within reach of private individuals. This will change the landscape of cinema dramatically. It's not going to kill Hollywood or movies as we know them now. It's a bit like how YouTube, combined with the smartphone and other accessible digital video recording devices, added to the media landscape.
 

BrianShaw
Member · Joined: Nov 30, 2005 · Messages: 17,044 · Location: La-la-land · Format: Multi Format
I skimmed through the summary and it's indeed that - it touches upon a few of the key points I made, but in doing so didn't achieve completeness or capture all of the essence.

"It" is being defensive and deliberately choosing to ignore the parts of your tome that "it" does not agree with. :smile:
 