ABBA sang Money Money Money ….
AI is about nothing but money, in its every application. It's a primitive way to enhance our lives, and what sells progresses, takes over, and then … it is over. Humans will have become robots of their own making.
Wow.
> AI is about nothing but money, in its every application.

Is AI all about money? Or is it just the tool being used by those who are all about money? There are many applications of AI that are about speed and efficiency, and although those can relate to money, they also relate to benefits to mankind and society. It is all about how AI is applied and by whom.
Sorry, wrong link - this one should be right: https://x.com/EHuanglu/status/2023449238114320514?s=20
Chinese director gives it a whirl. This is Seedance 2.0, if you notice any oddities or anything slightly off, I would expect that to be gone in Seedance 3.0.
In the hands of a skilled artist, what does this AI work become? Is it still slop?
Has anyone discussed how this technology might be equitable, removing all gatekeepers and barriers to entry? Anyone in any situation could be their own film studio. Is being against it, being against equitable outcomes for all? Who am I to demand everyone must play the game (film school, grunt work, clawing your way up the industry ladder, etc)? I might not like it, but objectively, it means that soon, anyone in any part of the world or financial status could create a 300 million dollar blockbuster film. Is that a bad thing? Will it all be slop or will the cream rise to the top? Are we only being subjective about this opportunity?
There may also be environmental questions. A blockbuster film is a city-sized effort, with an estimated 3,500 tons of pollution at the core of it. An AI blockbuster would be an estimated 1 ton. "But more people will make more films, creating more pollution." Well, a new photonic/metamaterial-based chip is already being built and heavily backed. One of these chips = 100 GPUs, at 1% of the power usage of 100 GPUs. Eventually a film-studio-capable system will run locally on a cellphone.
Don't get me wrong, I am quite irked AI companies have chosen to encroach on the arts. I would rather those cycles be put elsewhere. At the same time, am I the one with the problem? Am I refusing to be objective due to feeling less special, or humans becoming less required in art? It's a lot to ponder.
> I only found one instance that would have made me suspect AI.

Was it related to that river cruise boat going something like 65 mph while not making a massive bow wave? That one was gold! Honestly that was also the only bit that struck my eye (I was also not looking specifically for giveaways), apart perhaps from the awkward facial expression of the 'AI actor' (which, of course, they all were... but the one in the funny suit).
AI of course did not notice a continuity error in one scene, but that's not its job.
Yet.
I would expect it would be excellent for that purpose, once set up appropriately.
My understanding is that the AI model only creates individual shots which would then be assembled into a scene during editing - same as a traditional movie.
The continuity error I spotted is not an image generation problem, but an edit error.
The video image generation is amazing.
I guess that agents are going to start licensing talent image rights, rather than the talent themselves, over the next few years.
> You are limiting your consideration of AI here to a tiny snippet of the wide gamut of potential applications.

I was limiting my comments only to the example provided, which is of course only one possible use for AI even in film/tv production - an area with which I am very well acquainted.
> I was limiting my comments only to the example provided, which is of course only one possible use for AI even in film/tv production - an area with which I am very well acquainted.

An interesting use of AI is to have it critique a script or cut, rather than to have it originate work. It is quite good at providing feedback.
The potential outside of that scope is also limitless and is going to be extremely disruptive, even in the short term.
But every technology advance provides opportunities as well as disruption, so the world will adapt.
> The AI knows what looks like what.

This is in part a linguistic issue, but it pertains to very fundamental questions as well - I doubt whether we can state that AI "knows" anything. AI optimizes similarity in a directed, controlled manner. That is different from 'knowing'. Compare it to savantism. This draws focus to questions like what cognition really is and what 'knowing' entails (epistemology, basically).
Mind you - humans are limited in similar ways, at least many of us, although the distribution of aptitude and ineptitude is quite different between humans and AI.
Maybe AI is showing us we're not as special as we thought, and that is hugely painful for some to deal with?
> I'm not overly convinced this all peaks with LLMs. I think as AIs continue to code the AIs, new systems will emerge, especially new AI-designed hardware infrastructures tailored for it.

I think you're grossly overestimating the present architectural influence at a systemic level that LLMs have on development work. This is virtually nonexistent at present. It will likely change at some point, but it draws us back to the question I asked earlier about possible laws of diminishing returns. Presently it is not entirely clear how fundamental a step AI can make to improve itself, and that has everything to do with the cognition issue I highlight above.
> Humans are also, at some level of description, doing very sophisticated pattern completion - we just have the benefit of calling our version "understanding."

The problem is that this is tautological. Our present human understanding is that our brains work similarly to the neural networks we have based AI on - that's how we conceptualized AI in the first place. But this is also a relatively mechanistic approach, and we don't really know whether it accurately captures how cognition works in biological (esp. human) systems. Moreover, there is still the open question concerning complexity: we have attempted to recreate in AI the cumulative effect of gigantic numbers of relatively simple nanostructures (interconnected neurons) interacting, but in doing so we have had to make several assumptions, as well as allowances for the specific hardware - and silicon ultimately is not the same as fatty acids. Thus, in the quote above, AI appears to be prone to a restricted view that is inherent to how it was created and, more importantly, to the prevalence of views on how cognition works.
> Writing it off as "virtually nonexistent" understates the velocity. AlphaCode, AI-assisted chip design (Google's recent work), and reinforcement learning on synthetic data are all early signals that the feedback loop is forming, even if it hasn't closed yet.

Ah, come on. AI 'runs' on a complex architecture of sociotechnical systems. We can take the 'socio' part out by removing humans from the loop to a large extent, but that still leaves a complex technical system that goes far beyond relatively simple detail issues like chip design. The real architecture we're talking about is the industrial ecosystem that comprises the entire semiconductor industry and its path dependency (or, conversely, the potential for architectural and radical innovation), but also capital structures, geopolitical dynamics, etc. The notion that "the feedback loop is forming" is like saying of a toddler who has figured out how to stand up for 5 seconds without toppling over that he's developing nicely towards becoming a figure skater.