That is a view, not a "settled" matter.
The interactive complexity of the brain, the nervous system in general, and the body's other tissues really is a settled matter. For as long as humankind has studied its grey matter, it has observed that that matter doesn't exist in the isolation of a petri dish.
None of what I said suggests that the nervous system wouldn't behave in accordance with the human construct of 'physical laws'. Of course it does. The (human) acceptance of these laws is, in the end, an empirical matter: we base it on study of our environment. Evidently there's no magic at work - although the
sense of magic is an interesting aspect of the human experience that I'd challenge your thought experiment with. Speaking of that thought experiment: I don't see the merit in it. The whole essence lies in the practical question of feasibility, and in the phenomena we'd observe if we could make it work. Would the actual machine (AI) develop consciousness? If so, how would we be able to observe it? Would it have an identity and the ability to reflect? If so, how would we recognize them, and how would they differ from ours? Would the machine invent a god? We know the present generation of AI 'hallucinates' - but that, again, is a human construct. As far as we can tell so far, the machine doesn't believe anything. Heck, we don't even begin to understand what 'believing' constitutes.
And that's only the stuff happening on the inside - although arguably, all of it (at least in the case of the human brain) is by necessity also emergent from interactions with the environment.
What your mechanistic outlook simply doesn't acknowledge, or probably even allow, is the myriad ways in which human physiology - and by extension its neurology and ultimately its psychology - resembles a 'machine' only very superficially. As an attempt to recreate that hypothetical 'machine', AI has so far explored only the most basic mechanisms: the idea of an input/output system, a set of interconnected action centers (neurons), and a way these can be reconfigured. For one thing, we still do this in deterministic binary space, and we don't even know whether that's a suitable way to model what happens in biological neural networks. Nor do we know at this point whether the extremely fuzzy, abstract, high-level phenomena I've mentioned above can emerge merely from the interactive complexity within a basically deterministic mechanism like a computer-based neural network.
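To be fair to the point about how basic those mechanisms are: all three of them - inputs and outputs, interconnected units, and reconfigurable connections - fit in a few lines of code. The sketch below is an illustrative toy, not a model of any real AI system or of biology; the layer sizes, sigmoid activation, and random "reconfiguration" are arbitrary assumptions made just to show the skeleton.

```python
import math
import random

random.seed(0)  # deterministic, as the paragraph above notes

def make_layer(n_in, n_out):
    # A "layer" is just a weight matrix: one row of connection strengths
    # per unit. This is the entire substance of the interconnection.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    # Each unit sums its weighted inputs and squashes the result (sigmoid):
    # the whole "input/output system" in one line per unit.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in layer]

# Two inputs -> three hidden units -> one output.
hidden = make_layer(2, 3)
output = make_layer(3, 1)

def predict(x):
    return forward(output, forward(hidden, x))

def perturb(layer, scale=0.01):
    # "Reconfiguration" is nothing more than nudging the numbers in the
    # matrices; a crude random perturbation stands in here for real
    # gradient-based learning.
    return [[w + random.uniform(-scale, scale) for w in row] for row in layer]

y_before = predict([1.0, 0.0])
hidden = perturb(hidden)
y_after = predict([1.0, 0.0])
```

That a fully deterministic arithmetic loop of this shape is the foundation being scaled up is precisely what makes the question above non-trivial: nothing in the mechanism itself hints at where belief, identity or reflection would come from.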
The problem is that we're trying to model the Mona Lisa using Lego bricks, on the assumption that if you take enough sufficiently small bricks and arrange them in the right pattern, you should be able to recreate the painting. It's questionable whether that's true. The odds are that we haven't yet developed the language or concepts that would allow us to model this supposed 'machine'. Yet hubris makes us believe that we're totally brilliant with our box of Legos. Or at least, some of us think so.
We've not so much as scratched the surface.