
Artificial intelligence.

That is not what it is limited to. It is used, to great benefit, to enhance or modify existing images. Nothing fake beyond what had traditionally been done by retouching--only better.

I agree with you that AI for edits is OK. But AI to create new images isn't a photograph but rather a computer-generated graphic that looks like a photo.
 
This would have sounded insane a year ago; now it is actually not so far-fetched:

"Coding dies this year. Not evolves. Dies. By December, AI won’t need programming languages. It generates machine code directly. Binary optimized beyond anything human logic could produce. No translation. No compilation. Just pure execution."

An example would be that you no longer code apps; you tell the AI what you want and it renders it. Interactions within the inner and outer workings of the app are seen and processed by the AI, which then renders any required output. No code.

Having repaired computers in the late 1960s, I am familiar with programming in machine language. Each advancement in programming languages, such as Assembly, COBOL, Fortran, C, Java, etc., just makes it easier. Even with AI, you're going to have to know how to best prompt the machine. The prompts are just a new form of language.
 
We think of the negative impacts of AI, but there are also positive results. I only recently learned about two uses of AI that are benefiting mankind:
  • AI analyzes chemical compounds and suggests (much more quickly) similar compounds that should be investigated to resolve medical issues, faster than human scientists can come up with potential solutions.
  • AI analyzes existing chemical compounds with one medical use, and suggests alternate medical uses.
I am well aware of some of the criminal uses of AI, and my wife has a good friend who was victimized by what AI can do, losing thousands of dollars to fraud made possible by AI. But there are benefits from AI, too.

I just asked Google Gemini a question, and the answers it gave me showed it accessed all of my Google calendar appointments to adjust its answer. I didn't know I gave authority to Google to do this. How do you shut them down?
 
I agree with you that AI for edits is OK. But AI to create new images isn't a photograph but rather a computer-generated graphic that looks like a photo.

AI does not generate new images from scratch; it takes elements of existing ones and manipulates them. That is why so much of it looks like anime and over-retouched faces: that is what it sees on the internet.
 
it takes elements of existing ones and manipulates them
Not as such, though. It generates images from noise, compares them to mathematical abstractions of images it has been fed with, and adjusts the noise generation based on the similarity. This process is moderated by the tokens given to it in the form of a prompt. Very simplistically put. But it's not the same as 'taking elements from'; it's not like a collage. I think that's an important difference, although many people won't care or even realize.
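The process described above can be caricatured in a few lines of Python. This is a deliberately toy sketch, not how any real model works internally: the `fake_denoiser` function and the `TARGET` vector are hypothetical stand-ins for a trained network and a prompt embedding. The point is the loop shape: noise in, gradual adjustment toward "likely" images, and nothing collaged in from any source picture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "what the prompt steers toward" (4 values instead of megapixels).
TARGET = np.array([0.2, 0.8, 0.5, 0.1])

def fake_denoiser(x):
    """Estimate the 'noise' in x. A trained network would predict this
    from data; our hypothetical stand-in simply points from x toward TARGET."""
    return x - TARGET

def generate(steps=50, step_size=0.1):
    x = rng.normal(size=4)  # begin with pure noise, not image fragments
    for _ in range(steps):
        x = x - step_size * fake_denoiser(x)  # remove a bit of estimated noise
    return x

image = generate()  # ends up close to TARGET, yet no pixel was copied from anywhere
```

Even in this caricature, the output is synthesized from noise under the pull of a learned (here, faked) similarity signal, which is exactly why it is not a collage.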
 
Not as such, though. It generates images from noise, compares them to mathematical abstractions of images it has been fed with, and adjusts the noise generation based on the similarity. This process is moderated by the tokens given to it in the form of a prompt. Very simplistically put. But it's not the same as 'taking elements from'; it's not like a collage. I think that's an important difference, although many people won't care or even realize.
It may not literally take the physical elements, but it bases the forms on existing images. Oh, maybe that is what photography does, too!
 
I used "makes the rule" instead of "proves the rule" to avoid being too dogmatic. :smile:

The point being, the "makes the rule" actually reverses the meaning. Either "tests" or "challenges" or even "restricts" would be better substitutes.
 
It may not literally take the physical elements, but it bases the forms on existing images. Oh, maybe that is what photography does, too!

A distinction without a difference. The fact is, it's manipulating bits in the computer. It's not taking a photograph of reality by capturing photons.
 
A distinction without a difference.
To you, perhaps. It's a big difference in many ways. One of them is legal, which is why we see people get riled up about IPR theft but the legal case to be made continues to be very weak, unless we adjust our legal frameworks to become very flexible in this regard - which would have major repercussions for the rest of society.

I think there's also a philosophical and metaphysical difference that is relevant if you talk about art (or imagery with artistic intent) in particular. It does make a difference if the pixels are all brand new or whether they are recycled. It's a different process and that carries meaning into the end result.

Then there's the fundamental technical difference that has implications not just for the makers of the technology, but also its users. All the problems we've seen last year (which resolved remarkably quickly) with Chernobyl limbs trace directly back to this difference. Even with the presently commercialized model generations, this difference is relevant. A simple, real-life example: around Christmas and New Year, we've had quite a few people send us or share with us (digital) snaps of themselves/their families around Christmas trees etc. We've received/seen several where people AI-ed themselves into a more elaborate decor than they had access to - basically, "take this photo of us and put us in a cozy room with a fireplace and an elaborately decorated tree". The result is invariably that you get the people from the photo in the end result, except that they look slightly different... slightly... off. That's because the original photo is not actually duplicated into the end result - it's regenerated to look as close as possible to the original, but with the desired elements added/changed. It's just a simple practical example of how the fundamental difference can indeed have practical results ("wow, your sister looks creepy in this photo; didn't she notice this herself?")
 
While it may ultimately transform our lives, the current (2026) AI scene feels like a bubble.

https://gizmodo.com/top-chinese-chi...center-buildout-plan-is-half-baked-2000720567

Statements like those the article relies on need to be seen within the context of the respondent's strategic position in the global, industrial arena. Presently, all signs are pretty clear to me:
* virtually everybody is experimenting with this tech
* we see very, very few people who are so dissatisfied with it as to walk away from it
* there are plenty of critics but they (1) either keep using the tech for things they do perceive useful or (2) they've not really started using it and are reflecting on their expectations, not real experiences
* virtually all organizations are 'playing with' AI in a process of coming to grips with it (see e.g. this McKinsey update). Note that this is really different from the dotcom bubble when everybody was supposed to be busy with it, but at best was talking about it.

What the article ignores for some reason is that even if demand develops linearly in terms of numbers of users (which it won't!), the requirements on computing power (quite literally: turning GWh into computation) grow exponentially, due to an order-of-magnitude difference in power density but also the qualitative makeup of the system (i.e., they're different data centers than the ones we already have for other tasks - see e.g. here). This means that the current massive scale-up in data centers, esp. in the US, is not a gamble on the future - it's a sheer necessity just to keep up with even modest growth in demand, in both development of new models and adoption by the first wave of users.

The real question in my mind is how fast we can build semiconductor manufacturing plants and nuclear power plants, since those are going to be the bottlenecks within 12-24 months' time.

Again, keep in mind we see a Chinese chipmaker comment on an industry that presently relies on non-Chinese semiconductors for data centers built in the West, for a user base outside of China. China wants a piece of that pie, and one way to get it is to stall your competitors in order to buy yourself a little time to jump into the gap. China very successfully accommodated the solar power adoption process in e.g. Europe, they're working hard to corner the EV market, and those are just two examples out of many. There's no doubt in my mind that they want to be in a pole position to power the AI adoption curve next, as it makes perfect sense from a Chinese perspective to focus their efforts there. They have energy with little regulatory overhead, they have been building an electronics and increasingly also a semiconductor industry, they could quite conceivably gain access (through very doubtful means) to the leading edge of semicon manufacturing technology, and they have very compelling needs to find new boosters for their economy.
 
The real question in my mind is how fast we can build semiconductor manufacturing plants and nuclear power plants, since those are going to be the bottlenecks within 12-24 months' time.
I agree that power production and consumption is a key challenge for the next decade. China is willing and able to quickly produce power plants (nuclear, wind) and America historically and presently is not, so I wonder who else will join the party?
 
I agree that power production and consumption is a key challenge for the next decade. China is willing and able to quickly produce power plants (nuclear, wind) and America historically and presently is not, so I wonder who else will join the party?

China built more coal-fired power plants last year than in any of the previous ten years. I'll leave it at that to avoid getting political.
 
To you, perhaps. It's a big difference in many ways. One of them is legal, which is why we see people get riled up about IPR theft but the legal case to be made continues to be very weak, unless we adjust our legal frameworks to become very flexible in this regard - which would have major repercussions for the rest of society.

I think there's also a philosophical and metaphysical difference that is relevant if you talk about art (or imagery with artistic intent) in particular. It does make a difference if the pixels are all brand new or whether they are recycled. It's a different process and that carries meaning into the end result.

Then there's the fundamental technical difference that has implications not just for the makers of the technology, but also its users. All the problems we've seen last year (which resolved remarkably quickly) with Chernobyl limbs trace directly back to this difference. Even with the presently commercialized model generations, this difference is relevant. A simple, real-life example: around Christmas and New Year, we've had quite a few people send us or share with us (digital) snaps of themselves/their families around Christmas trees etc. We've received/seen several where people AI-ed themselves into a more elaborate decor than they had access to - basically, "take this photo of us and put us in a cozy room with a fireplace and an elaborately decorated tree". The result is invariably that you get the people from the photo in the end result, except that they look slightly different... slightly... off. That's because the original photo is not actually duplicated into the end result - it's regenerated to look as close as possible to the original, but with the desired elements added/changed. It's just a simple practical example of how the fundamental difference can indeed have practical results ("wow, your sister looks creepy in this photo; didn't she notice this herself?")

AI programs "learn" from the results of human activity. AI doesn't capture photons of real life. So whatever methods it uses to manipulate pixels, "new" or "old", have nothing to do with photography and shooting new photos.
 
AI programs "learn" from the results of human activity. AI doesn't capture photons of real life. So whatever methods it uses to manipulate pixels, "new" or "old", have nothing to do with photography and shooting new photos.

I don’t think anyone has claimed that AI produces photographs, just imagery that resembles and can be taken for photographs. A very skilled illustrator can produce such imagery by hand, and that is often considered something admirable and of value. Photorealistic paintings mimic the look of snapshots, does that make them disingenuous?
 
China built more coal-fired power plants last year than in any of the previous ten years. I'll leave it at that to avoid getting political.

Yes, coal too. I don’t think that alters the meaning of anything that I said but yay coal. 🙃
 
I don’t think anyone has claimed that AI produces photographs, just imagery that resembles and can be taken for photographs. A very skilled illustrator can produce such imagery by hand, and that is often considered something admirable and of value. Photorealistic paintings mimic the look of snapshots, does that make them disingenuous?

Many people claim that photo-shopped images where skies and other objects are cloned in and out are actual photography. Why shouldn't the next step be the claim that AI photographs are just as valid as those taken with a camera? It's a slippery slope. I was pointing out the issue.
 
Many people claim that photo-shopped images where skies and other objects are cloned in and out are actual photography. Why shouldn't the next step be the claim that AI photographs are just as valid as those taken with a camera? It's a slippery slope. I was pointing out the issue.
Many (and some of them well-known and well-regarded) film photographers would (and still do) routinely replace the skies in their landscape photos. Old news. When did photography ever get the definition of an image straight from the film? That is a false pretense and does not necessarily have any relevance to the medium. It may be how you see things, but you don't get to make the rules. And there is no such thing as an AI photograph. Only AI generated images that resemble photographs.
 
What the article ignores for some reason is that even if demand develops linearly in terms of numbers of users (which it won't!), the requirements on computing power (quite literally: turning GWh into computation) grow exponentially, due to an order-of-magnitude difference in power density but also the qualitative makeup of the system (i.e., they're different data centers than the ones we already have for other tasks - see e.g. here). This means that the current massive scale-up in data centers, esp. in the US, is not a gamble on the future - it's a sheer necessity just to keep up with even modest growth in demand, in both development of new models and adoption by the first wave of users.

Oh yes, it's a gamble - don't kid yourself. From an investor's point of view, this stuff is super-speculative, and it's not immediately clear to me how they're going to transform AI from a money pit into a cash cow.
 
That's something else, though (albeit very relevant). The thing is, the Microsofts, Amazons and Alphabets can't afford not to play. They're all pointing big guns at each other. They can't back out even if they wanted to.
 
Oh yes, it's a gamble - don't kid yourself. From an investor's point of view, this stuff is super-speculative, and it's not immediately clear to me how they're going to transform AI from a money pit into a cash cow.

It is a need-to-have within one's product offering; otherwise you succumb to the loss of market share gobbled up by competing offerings that say their products include it - once the public gets over its fear of AI and learns of the benefits that can be derived from its inclusion in different products.
Imagine being able to tell an AI TV set, "Optimize tonight's viewing schedule to my preferred programs or genres, and program that into tonight's selections." Would you buy a TV that could not do that each night?!
 