5760x1440 dpi, ChatGPT says
but ChatGPT says this is marketing; the real one is 1200x1200 dpi
It depends on the imaging system. See
@wiltw's response above, which refers to color/multi-channel inkjet prints. There's a similar story with laser prints (which is relevant for you given your idea of
using a copy center to print your negatives). A laser printer can really only print small dots of identical size, and these dots are 100% black - no in-between grey shades. This means that to image a pixel that has shades of grey (or color), a laser printer needs to print a number of closely spaced dots that, together with the white space between them, create the required shade when viewed at a distance. Inkjet has a little more flexibility, as most photo-oriented inkjet systems can vary dot size to an extent (although this, too, is quite limited).
When you see a resolution of 1200dpi, it's nearly always in reference to a laser printer. Resolutions like 1440dpi and multiples thereof are nearly always inkjet resolutions. There's no particular reason for this; it's a bit of a historical coincidence that things developed this way.
To understand the real resolution in terms of PPI from the 1200dpi laser printer, let's assume the printer needs to be able to create 256 shades (including white) by clustering small dots in a grid pattern. Since every dot is either white (unprinted) or black (printed), this means that a 16x16 (=256) grid is needed to create all required shades. The effective ppi of the laser printer is therefore 1200/16 = 75ppi. That's a pretty abysmal resolution, but in practice it isn't quite so bad, since the grid approach I just described is only a very crude and simplistic way of arranging individual dots; it's what you get when you use a typical 1990s laser printer with AM (amplitude modulation) halftone screening (see example below;
taken from here). So your typical HP LaserJet II from 1994 or so may indeed be limited to about 75ppi prints. However, printer manufacturers adopted FM (frequency modulation) screening which, very simplistically put, scatters dots in a quasi-random fashion instead of printing a neat little grid. The result is the little wormlike patterns you may have seen when looking at a laser print from up close. Inkjet printers also use FM screening; moreover, they rely on dots bleeding together a little as the ink fans outward into the paper, which helps them blend (this is not possible with laser prints).
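To make the AM-vs-FM distinction a bit more concrete, here's a small Python sketch (my own illustration, not tied to any particular printer driver): it first works out the effective ppi of the clustered-dot (AM) approach described above, then runs Floyd-Steinberg error diffusion - a textbook stand-in for how FM screening scatters dots - over a grey ramp so you can see the scattered-dot pattern emerge.

```python
import math

# Effective resolution of clustered-dot (AM) halftoning: a binary printer
# builds grey levels from an n x n dot cell, trading resolution for
# tonality. (Strictly an n x n cell gives n*n + 1 levels; the 256-level
# figure above is the usual approximation.)
def effective_ppi(addressable_dpi, grey_levels):
    cell = math.isqrt(grey_levels)      # 256 levels -> 16 x 16 cell
    return addressable_dpi / cell

print(effective_ppi(1200, 256))         # 1200 / 16 = 75.0 ppi

# FM screening stand-in: Floyd-Steinberg error diffusion scatters binary
# dots so their local density matches the requested grey value.
def error_diffuse(img):                 # values: 0.0 = paper, 1.0 = ink
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dot = 1 if buf[y][x] >= 0.5 else 0
            out[y][x] = dot
            err = buf[y][x] - dot       # push the error to unvisited pixels
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out

# A light-to-dark ramp rendered as scattered dots ('#' = printed dot):
ramp = [[x / 15 for x in range(16)] for _ in range(6)]
for row in error_diffuse(ramp):
    print(''.join('#' if d else '.' for d in row))
```

The quasi-random dot placement in the output is (very loosely) the same idea behind the wormlike patterns you see on a real laser print.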
To cut a very long story short, it turns out that most of the desktop printing systems we generally use have an effective imaging resolution of around 300-360dpi - although a concrete number is impossible to give when FM screening is involved. That's the resolving power in a best-case scenario; for inkjet digital negatives, you often end up at a little less than this. I frequently print clichés for PCB manufacturing on my Epson 3880 inkjet printer, and the smallest features it will reliably image are around 0.12-0.15mm or so, which means that the real resolution of the black ink channel in that application is around 200-250dpi.
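One way to sanity-check that figure - a back-of-the-envelope reading on my part, not an exact definition of resolving power - is to note that if the smallest feature a printer reliably images is f millimeters, the effective resolution is roughly 25.4/f dots per inch:

```python
# Back-of-the-envelope conversion: smallest reliably imaged feature (mm)
# to an approximate effective resolution in dpi (25.4 mm per inch).
def feature_to_dpi(feature_mm):
    return 25.4 / feature_mm

for f in (0.12, 0.15):
    print(f, '->', round(feature_to_dpi(f)), 'dpi')   # ~212 and ~169 dpi
```

Depending on how you count (a single feature vs. a line pair, positive vs. negative features), you land somewhere in the low-200s dpi region - the same ballpark as the figure above.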
This means that in practice, when you output to print, it's usually sufficient to stick to the 300dpi that
@loccdor mentions.
Consequently, an A3-sized full-page print (11.7x16.5") will print OK from a file that's 3510x4950px in size, or ca. 17 megapixels.
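The arithmetic behind that, if you want to run it for other paper sizes or target resolutions (the A3 dimensions here are just the example from above):

```python
# Pixel dimensions and megapixels needed for a print at a given ppi.
def pixels_for_print(width_in, height_in, ppi=300):
    w, h = round(width_in * ppi), round(height_in * ppi)
    return w, h, w * h / 1e6            # width px, height px, megapixels

print(pixels_for_print(11.7, 16.5))     # A3 at 300ppi: (3510, 4950, ~17.4 MP)
```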
Does that mean you only need a 17 MP image? Not necessarily, since the actual resolving power of a digital camera is not necessarily equal to the pixel dimensions of its sensor. The short version is that to get a really excellent 17MP image, you may have to start with an original digital camera capture that's larger - perhaps 20-30MP. However, this depends a lot on the taking conditions (think of motion blur), the lens used, etc.
On the other hand, consider the typical viewing distance of an A3 print: people will step back a little to appreciate the image. Human eye resolving power may indeed correspond to around 300dpi at close range under optimal conditions (but alas - anyone over 40 probably doesn't qualify anymore to begin with!), but at typical viewing distances we won't be able to distinguish such small features. So in practice, we may often get away with a
smaller file than the 17 MP mentioned above and still get a decent A3-sized print. If you look at threads from the period 2005-2010 or so, when people were using digital cameras with 5-10 MP resolution, you'll notice that plenty of people say they've made excellent 16x20" prints from their (by today's standards) tiny digital files. How many pixels you need depends a lot on the type of image, viewing distance, printing process, etc.
There's no hard & fast answer.
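That said, if you want to put a rough number on the viewing-distance effect, a common rule of thumb (my addition - the usual assumption is that the eye resolves about one arcminute of angle) works out like this:

```python
import math

# Rule of thumb: if the eye resolves ~1 arcminute, the ppi beyond which
# extra detail becomes invisible at viewing distance d (inches) is
# 1 / (d * tan(1 arcminute)), i.e. roughly 3440 / d.
def required_ppi(viewing_distance_in):
    return 1 / (viewing_distance_in * math.tan(math.radians(1 / 60)))

print(round(required_ppi(12)))   # reading distance: ~286 ppi
print(round(required_ppi(24)))   # arm's length for an A3 print: ~143 ppi
```

Which is one way of seeing why those 5-10 MP files could still make pleasing large prints: at a sensible viewing distance, the print needs well under 300ppi.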
In practice, I find that the limitations of digital negatives aren't so much in the digital file you start with. The real problems are things like banding, interference artifacts that turn up as lines or moiré-like patterns (especially when using laser printers/copiers for making negatives), coarse transitions (posterization) in smooth gradients, and an overall 'grainy' look as a result of the individual inkjet or laser dots showing up in the final print. Optimizing the quality of the final (kallitype, Van Dyke, etc.) print has far more to do with optimizing the quality of the negative itself - which means controlling the printing process parameters - than with the digital image you start with.