Artificial intelligence.

Sorry, wrong link; this one should be right: https://x.com/EHuanglu/status/2023449238114320514?s=20

A Chinese director gives it a whirl. This is Seedance 2.0; if you notice any oddities or anything slightly off, I would expect that to be gone in Seedance 3.0.

In the hands of a skilled artist, what does this AI work become? Is it still slop?

Has anyone discussed how this technology might be equitable, removing all gatekeepers and barriers to entry? Anyone, in any situation, could be their own film studio. Is being against it being against equitable outcomes for all? Who am I to demand that everyone must play the game (film school, grunt work, clawing your way up the industry ladder, etc.)? I might not like it, but objectively it means that soon anyone, in any part of the world and at any financial status, could create a $300 million blockbuster film. Is that a bad thing? Will it all be slop, or will the cream rise to the top? Are we only being subjective about this opportunity?

There may also be environmental questions. A blockbuster film is a city-sized effort, with an estimated 3,500 tons of pollution at the core of it. An AI blockbuster might be estimated at 1 ton. "But more people will make more films, creating more pollution"... Well, a new photonic/metamaterial-based chip is already being built and heavily backed; one of these chips is claimed to match 100 GPUs at 1% of their power usage. Eventually a film-studio-capable system will run locally on a cellphone.
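A quick back-of-envelope check of those figures is easy to do. Note that every input below is the post's own unverified claim, not measured data:

```python
# Back-of-envelope arithmetic on the figures quoted above. Every input is
# the post's own (unverified) claim, not measured data.

traditional_film_tons = 3500   # claimed pollution footprint of a conventional blockbuster
ai_film_tons = 1               # claimed footprint of an AI-generated equivalent

gpu_equivalents_per_chip = 100  # claimed: one photonic chip does the work of 100 GPUs
chip_power_fraction = 0.01      # claimed: while drawing 1% of those 100 GPUs' power

footprint_ratio = traditional_film_tons / ai_film_tons

# The chip delivers 100 GPUs' worth of compute while drawing as much power
# as (0.01 * 100) = 1 GPU, i.e. a 100x performance-per-watt improvement.
perf_per_watt_gain = gpu_equivalents_per_chip / (chip_power_fraction * gpu_equivalents_per_chip)

print(f"claimed footprint reduction: {footprint_ratio:.0f}x")    # 3500x
print(f"claimed perf-per-watt gain:  {perf_per_watt_gain:.0f}x")  # 100x
```

So even if far more people made films, by the post's own numbers the compute cost per film would fall by roughly two orders of magnitude per watt; whether the claimed chip figures hold up is another matter.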

Don't get me wrong, I am quite irked that AI companies have chosen to encroach on the arts. I would rather those cycles be put elsewhere. At the same time, am I the one with the problem? Am I refusing to be objective because it makes me feel less special, or because humans are becoming less required in art? It's a lot to ponder.
 
ABBA sang Money Money Money ….

AI is about nothing but money, in its every application. It's a primitive way to enhance our lives, and what sells progresses, takes over, and then … it is over. Humans will have become robots of their own making.

If AI takes over, will it still be about money? Are greed and power baked into the programming?
 

Outside of the creative fields, AI has been very helpful in my life. You just have to harness it instead of the other way around. I'm in control here, not the computer.
 
AI is about nothing but money, in its every application.
Is AI all about money? Or is it just the tool being used by those who are all about money? There are many applications of AI that are about speed and efficiency and although those can relate to money, they also relate to benefits to mankind and society. It is all about how AI is applied and by whom.
 

That is an amazing video. I tried to watch it without looking for AI tells, which is hard to do if you know that it’s AI, but still I only found one instance that would have made me suspect AI. Incredible.
 
I only found one instance that would have made me suspect AI.
Was it related to that river cruise boat going something like 65 mph while not making a massive bow wave? That one was gold! Honestly, that was also the only bit that struck my eye (I was also not looking specifically for giveaways), apart perhaps from the awkward facial expression of the 'AI actor' (which, of course, they all were... but I mean the one in the funny suit).
 
AI, of course, did not notice a continuity error in one scene... but that's not its job.
Yet.

I would expect it would be excellent for that purpose, once set up appropriately.
 
I would expect it would be excellent for that purpose, once set up appropriately.

My understanding is that the AI model only creates individual shots which would then be assembled into a scene during editing - same as a traditional movie.

The continuity error I spotted is not an image generation problem, but an edit error :smile:

The video image generation is amazing.

I guess that agents are going to start licensing talent image rights, rather than the talent themselves, over the next few years.
 

At 4:14 the way the set travels and disappears seemed AI to me.
 

You are limiting your consideration of AI here to a tiny snippet of the wide gamut of potential applications.
A continuity checking application would be a natural, because continuity checking depends on pattern recognition, something that AI tools are extremely proficient at.
It wouldn't be used just with AI generated video. It would be equally valuable with movies shot the old fashioned way.
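To illustrate why continuity checking is a natural fit for pattern recognition, here is a deliberately toy sketch. Everything in it is invented for illustration: frames are flat lists of grayscale values and "similarity" is just normalized histogram overlap, whereas a real system would compare learned visual features across takes:

```python
# Toy illustration of the pattern-matching idea behind automated continuity
# checking. A "frame" here is a flat list of grayscale values (0-255);
# real tools would use learned visual features, not raw histograms.

def histogram(frame, bins=8):
    """Bucket grayscale values into a normalized histogram."""
    counts = [0] * bins
    for v in frame:
        counts[min(v * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def continuity_score(frame_a, frame_b):
    """Histogram overlap: 1.0 = identical distributions, 0.0 = disjoint."""
    ha, hb = histogram(frame_a), histogram(frame_b)
    return sum(min(a, b) for a, b in zip(ha, hb))

# Two takes of the "same" set should score high; a prop whose brightness
# changed dramatically between takes drags the score down.
take_1 = [120] * 90 + [30] * 10   # mostly mid-gray set, one dark prop
take_2 = [120] * 90 + [220] * 10  # same set, but the prop is now bright

score = continuity_score(take_1, take_2)
print(f"continuity score: {score:.2f}")  # 0.90 -> worth flagging for review
```

The point is simply that "does this shot statistically resemble the adjacent shot?" is exactly the kind of question pattern-recognition systems answer well, whether the footage was AI-generated or shot the old-fashioned way.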
 
You are limiting your consideration of AI here to a tiny snippet of the wide gamut of potential applications.
I was limiting my comments only to the example provided, which is of course only one possible use for AI even in film/TV production, an area with which I am very well acquainted.

The potential outside of that scope is also limitless and is going to be extremely disruptive, even in the short term.
But every technology advance provides opportunities as well as disruption, so the world will adapt.
 

Time to take up golf.
 
An interesting use of AI is to have it critique a script or cut, rather than to have it originate work. It is quite good at providing feedback.
 

Yeah, it has gone beyond VFX and CGI as we have always known them. The AI is not creating CGI in the traditional sense; it is not building extensive wireframes, lighting effects, and renders. The AI knows what looks like what, and it simply shows you a pixel-by-pixel representation of it. This is where a big part of the 'Hollywood is cooked' sentiment comes into it, I think. Hollywood has spent decades trying to re-create realism (which costs countless hours, teams, compute rendering, and more), and even with all the power behind Hollywood VFX, it generally never looks truly real. The AI-produced visuals will be as real as we would naturally see them. I'm sure Hollywood will use, and is already starting to use, this to its advantage, but at this point anyone can do the same or even better 🤷‍♂️
 
The AI knows what looks like what
This is in part a linguistic issue, but it pertains to very fundamental questions as well. I doubt whether we can state that AI "knows" anything. AI optimizes similarity in a directed, controlled manner. That's different from 'knowing'. Compare it to savantism. This draws focus to questions like what cognition really is and what 'knowing' entails (epistemology, basically).

Very simply put, we're presented with something that walks like a duck and quacks like a duck, but these properties may not accurately capture true 'duckism'. We're at the point now where AI probably passes the Turing test quite consistently and it even gives us the feeling of it 'understanding' things (including ourselves) - but if you get down to it, it still turns out that even the most advanced models have trouble with (or are actually fundamentally incapable of) conceptualization, grappling with causality etc. Mind you - humans are limited in similar ways, at least many of us, although the distribution of aptitude and ineptitude is quite different between humans and AI.

The interesting situation we have now is that AI can presently tell a story without understanding the story as such, or even understanding that it's telling a story, or having any notion of what storytelling is to begin with.
 
Mind you - humans are limited in similar ways, at least many of us, although the distribution of aptitude and ineptitude is quite different between humans and AI.

(For now?) Maybe AI is showing us we're not as special as we thought, and that is hugely painful for some to deal with? With each new model release or leap, humans become more irrelevant.

I'm not overly convinced this all peaks with LLMs. I think that as AIs continue to code the AIs, new systems will emerge, especially new AI-designed hardware infrastructures tailored for them. It seems many are quick to consider LLMs 'the final boss'; I think the LLMs are only a boot loader for what is to come.
 
Maybe AI is showing us we're not as special as we thought, and that is hugely painful for some to deal with?

That's also an effect, but it sidesteps the issue I'm pointing at. The argument I'm making is not one about superiority either way, or about human vs. machine relevance; that is, for me, entirely beside the point. It's about a very fundamental question of what cognition is. This also has practical implications. I notice it on a daily basis when I engage e.g. Copilot (GPT5.1 model) in troubleshooting of, say, embedded-systems issues. It turns out it's really fast at offering suggestions, but it doesn't actually think or understand as such. There's no reasoning behind the suggestions it puts forward. A model like that can improve and offer suggestions that are less random, but the fundamental difference remains in how these suggestions or solutions come about. Once you reach the point where the suggestions are specific and relevant enough, it may not appear to matter to the end user, but the fundamental difference is still there.

I'm not overly convinced this all peaks with LLMs, I think as ai's continue to code the ai's, new systems will emerge, especially new ai designed hardware infrastructures tailored for it.
I think you're grossly overestimating the architectural influence that LLMs presently have on development work at a systemic level. It is virtually nonexistent at present. That will likely change at some point, but it draws us back to the question I asked earlier about possible laws of diminishing returns. Presently it's not entirely clear how fundamental a step AI can make to improve itself, and that has everything to do with the cognition issue I highlight above.

Again, the fact that it's a duck-like quack doesn't make it a duck.
 
For fun, I asked the new Claude Sonnet 4.6 to reply to our recent posts here:

"Koraks, the duck analogy is doing a lot of work here and I think it's both compelling and slightly misleading. You're right that something generating correct outputs through pattern completion is fundamentally different from something reasoning its way to those outputs — that distinction matters enormously for reliability, generalization, and trust. But the analogy implies the difference is permanent. The open question is whether "real cognition" is itself a different kind of thing, or whether it's what pattern matching looks like at sufficient depth and scale. We don't actually know. Humans are also, at some level of description, doing very sophisticated pattern completion — we just have the benefit of calling our version "understanding."

On Sean's boot loader point — I think this is right in direction but the timeline is genuinely uncertain. LLMs designing their own successors is happening at the margins now, but the architectural influence is still shallow. That said, writing it off as "virtually nonexistent" understates the velocity. AlphaCode, AI-assisted chip design (Google's recent work), and reinforcement learning on synthetic data are all early signals that the feedback loop is forming, even if it hasn't closed yet.

The boat without a bow wave is still the best image in this whole discussion."
 
Humans are also, at some level of description, doing very sophisticated pattern completion — we just have the benefit of calling our version "understanding."
The problem is that this is tautological. Our present human understanding is that our brains work similarly to the neural networks we have based AI on; that's how we conceptualized AI in the first place. But this is also a relatively mechanistic approach, and we don't really know whether it accurately captures how cognition in biological (esp. human) systems works. Moreover, there is still the open question concerning complexity: we have attempted to recreate in AI the cumulative effect of gigantic numbers of relatively simple nanostructures (interconnected neurons) interacting, but in doing so we have had to make several assumptions as well as allowances for the specific hardware, and silicon ultimately is not the same as fatty acids. Thus, in the quote above, AI appears to be prone to a restricted view that's inherent in how it was created and, more importantly, in the prevailing views on how cognition works.

writing it off as "virtually nonexistent" understates the velocity. AlphaCode, AI-assisted chip design (Google's recent work), and reinforcement learning on synthetic data are all early signals that the feedback loop is forming, even if it hasn't closed yet.
Ah, come on. AI 'runs' on a complex architecture of sociotechnical systems. We can take the 'socio' part out by removing humans from the loop to a large extent, but that still leaves a complex technical system that goes far beyond relatively simplistic detail issues like chip design. The real architecture we're talking about is the industrial ecosystem that comprises the entire semiconductor industry and its path dependency (or, conversely, the potential for architectural and radical innovation), but also capital structures, geopolitical dynamics, etc. The notion that "the feedback loop is forming" is like saying of a toddler who has figured out how to stand up for 5 seconds without toppling over that he's developing nicely towards becoming a figure skater.

The AI output here demonstrates with painful obviousness how limited the technology really is when it comes to actual understanding, let alone directing its own actions. Yes, what it does is phenomenal. But the duck analogy is really solid, and the 'arguments' AI offers above are pretty much void if you dissect them (which it cannot do itself!).
 
At some point (probably when I experienced Gemini 3 Pro) I gave up and started zooming out, as the overall trajectory now seems the undisputed king. So far, no wall, no flatline. Betting against the intelligence explosion happening no longer seems a logical position. It seems all criticisms since the release of ChatGPT are now tiny dots on the lower line of the hockey stick. I could be completely wrong, of course, but I have yet to see any claims defeat the trajectory; the curve continues upward.

That being said, my assumptions are as big as those on the other side of it. I'm sure we'll know where all this goes soon enough. My prediction is that all human cognitive capability would seem trivial for super intelligence, should it be achieved.


[Attached chart: 1771503808946.png, a capability curve projected out to 2028]
 
Predicting developments like these in linear terms (the curve until 2028) is problematic enough. Extending the predictions with exponential leaps is just fantasizing.
What's on the vertical axis? How do you operationalize 'intelligence'? Central to my argument is that the business of trying to compare things as if they consist of a single factor is too simplistic. The question is not how it compares to humans. This is irrelevant. We already know computers calculate faster than humans and robots move faster than us. It's not the point.
 
I think time is the only thing that will resolve these questions (maybe only 12-18 months).
 