"The photographer won't even have to go to the affair." Come to think of it, the groom might not have to, either.
It is not limited to that, though. It is used, to great benefit, to enhance or modify existing images. Nothing fake beyond what has traditionally been done by retouching--only better.
This would have sounded insane a year ago; now it is actually not so far-fetched:
"Coding dies this year. Not evolves. Dies. By December, AI won’t need programming languages. It generates machine code directly. Binary optimized beyond anything human logic could produce. No translation. No compilation. Just pure execution."
An example: you no longer code apps; you tell the AI what you want and it renders the result. Interactions within the inner and outer workings of the app are seen and processed by the AI, which then renders any required output. No code.
We think of the negative impacts of AI, but there are also positive results. I only recently learned about two uses of AI that are benefiting mankind.
I am well aware of some of the criminal uses of AI, and my wife has a good friend who was victimized by what AI can do, losing thousands of dollars to fraud made possible by AI. But there are benefits from AI, too.
- AI analyzes chemical compounds and suggests similar compounds that should be investigated to resolve medical issues, far faster than human scientists can come up with potential solutions.
- AI analyzes existing chemical compounds with one medical use and suggests alternate medical uses.
I agree with you that AI for edits is OK. But AI to create new images isn't a photograph but rather a computer-generated graphic that looks like a photo.
"It takes elements of existing ones and manipulates them"? Not as such, though. It generates images from noise, compares them to mathematical abstractions of the images it has been fed, and adjusts the noise generation based on the similarity. This process is moderated by the tokens given to it in the form of a prompt. Very simplistically put. But it's not the same as 'taking elements from'; it's not like a collage. I think that's an important difference, although many people won't care or even realize.
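To make that "start from noise and nudge toward a target" idea concrete, here is a deliberately toy sketch. It is not a real diffusion model (there is no learned denoiser, and the target vector, step size, and step count are all made-up assumptions); it only illustrates the iterative refine-noise-toward-a-goal loop described above, as opposed to cutting and pasting pieces of existing images.

```python
import random

def toy_generate(target, steps=200, step_size=0.05, seed=0):
    """Toy stand-in for iterative denoising: start from pure noise and
    repeatedly nudge each value toward a target 'concept' vector (a
    stand-in for prompt-conditioned guidance). Purely illustrative."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in target]   # start: pure random noise
    for _ in range(steps):
        # one small update step toward the target, loosely analogous to
        # one guided denoising step; nothing is copied from the target
        x = [xi + step_size * (ti - xi) for xi, ti in zip(x, target)]
    return x

# hypothetical 4-value "concept" vector, just for demonstration
target = [0.2, 0.8, -0.5, 0.1]
result = toy_generate(target)
error = max(abs(r - t) for r, t in zip(result, target))
print(error < 0.01)  # prints True: the noise has converged toward the target
```

The point of the sketch is that the output is synthesized from noise under guidance, not assembled from fragments, which is the distinction being drawn in the post above.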
I used "makes the rule" instead of "proves the rule" to avoid being too dogmatic.
It may not literally take the physical elements but it bases the forms from existing images. Oh, maybe that is what photography does, too!
A distinction without a difference. The fact is, it's manipulating bits in the computer. It's not taking a photograph of reality by capturing photons.
While it may ultimately transform our lives, the current (2026) AI scene feels like a bubble.
https://gizmodo.com/top-chinese-chi...center-buildout-plan-is-half-baked-2000720567
I agree that power production and consumption is a key challenge for the next decade. China is willing and able to quickly produce power plants (nuclear, wind) and America historically and presently is not, so I wonder who else will join the party?The real question in my mind is how fast we can build semiconductor manufacturing plants and nuclear power plants, since those are going to be the bottlenecks within 12-24 months' time.
I agree that power production and consumption is a key challenge for the next decade. China is willing and able to quickly produce power plants (nuclear, wind) and America historically and presently is not, so I wonder who else will join the party?
To you, perhaps. It's a big difference in many ways. One of them is legal, which is why we see people get riled up about IPR theft but the legal case to be made continues to be very weak, unless we adjust our legal frameworks to become very flexible in this regard - which would have major repercussions for the rest of society.
I think there's also a philosophical and metaphysical difference that is relevant if you talk about art (or imagery with artistic intent) in particular. It does make a difference if the pixels are all brand new or whether they are recycled. It's a different process and that carries meaning into the end result.
Then there's the fundamental technical difference that has implications not just for the makers of the technology, but also for its users. All the problems we saw last year (which were resolved remarkably quickly) with Chernobyl limbs trace directly back to this difference. Even with the presently commercialized model generations, this difference is relevant.
A simple, real-life example: around Christmas and New Year, quite a few people sent us or shared with us (digital) snaps of themselves/their families around Christmas trees etc. We received/saw several where people AI-ed themselves into a more elaborate decor than they had access to - basically, "take this photo of us and put us in a cozy room with a fireplace and an elaborately decorated tree". The result is invariably that you get the people from the photo in the end result, except that they look slightly different... slightly... off. That's because the original photo is not actually duplicated into the end result - it's regenerated to look as close as possible to the original, but with the desired elements added/changed. It's just a simple practical example of how the fundamental difference can indeed have practical results ("wow, your sister looks creepy in this photo; didn't she notice this herself?")
AI programs "learn" from the results of human activity. AI doesn't capture photons of real life. So whatever methods it uses to manipulate pixels, "new" or "old", have nothing to do with photography and shooting new photos.
China built more coal-fired power plants last year than in any of the previous ten years. I'll leave it at that to avoid getting political.

Yes, coal too. I don't think that alters the meaning of anything that I said, but yay coal.
I don't think anyone has claimed that AI produces photographs, just imagery that resembles and can be taken for photographs. A very skilled illustrator can produce such imagery by hand, and that is often considered something admirable and of value. Photorealistic paintings mimic the look of snapshots; does that make them disingenuous?
"Many people claim that photo-shopped images where skies and other objects are cloned in and out are actual photography. Why shouldn't the next step be the claim that AI photographs are just as valid as those taken with a camera? It's a slippery slope. I was pointing out the issue."
Many (and some of them well-known and well-regarded) film photographers would (and still do) routinely replace the skies in their landscape photos. Old news. When did photography ever get the definition of an image straight from the film? That is a false pretense and does not necessarily have any relevance to the medium. It may be how you see things, but you don't get to make the rules. And there is no such thing as an AI photograph. Only AI-generated images that resemble photographs.
What the article ignores for some reason is that even if demand develops linearly in terms of numbers of users (which it won't!), the requirements on computing power (quite literally: turning GWh into computation) grow exponentially, due to an order-of-magnitude difference in power density but also in the qualitative makeup of the systems (i.e., these are different data centers than the ones we already have for other tasks - see e.g. here). This means that the current massive scale-up in data centers, especially in the US, is not a gamble on the future - it's a sheer necessity just to keep up with even modest growth in demand, both in development of new models and in adoption by the first wave of users.
Oh yes, it's a gamble, don't kid yourself: from an investor's point of view, this stuff is super-speculative, and it's not immediately clear to me how they're going to transform AI from a money pit into a cash cow.
