Let us start with the assumption that a 24 MPixel camera means a 6000H x 4000V pixel sensor, 36mm x 24mm in size. The OP stated it ends up as a 10m wide x 6m tall image viewed from 1.8m away.
So the original 4000 pixels of detail are spread across the 6m final image height, meaning each pixel of the image covers 1.5mm at that size. If you pixel-replicate the original image so that it goes from 4000 pixels tall to 8000 pixels tall, the 'stair stepping' of all non-vertical/non-horizontal edges is half as apparent, with each individual pixel now covering 0.75mm, but with no greater 'detail' perceivable in the image. It is really easy to stand 1.8m away from a metric (millimeter) ruler and see how well your eye perceives individual lines and spaces when each is 1mm wide.
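If it helps, here is a quick back-of-envelope sketch (Python, purely illustrative) of that per-pixel arithmetic for the assumed 6m tall final image:

```python
# Back-of-envelope check of per-pixel size on the assumed 6 m tall final image.
PRINT_HEIGHT_MM = 6000  # 6 m tall

for pixels_tall in (4000, 8000):  # original height vs. 2x pixel-replicated height
    pixel_size_mm = PRINT_HEIGHT_MM / pixels_tall
    print(f"{pixels_tall} px tall -> each pixel covers {pixel_size_mm:.2f} mm on the print")

# 4000 px tall -> each pixel covers 1.50 mm on the print
# 8000 px tall -> each pixel covers 0.75 mm on the print
```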
So does a 'higher pixel count' result in 'more detail'? Read on...
I made a rudimentary image to show the results of using pixel replication to raise the total pixel count, while viewing at the same final size (overall height):
- 1-pixel wide horizontal and vertical line pairs separated by 1 pixel of white space, plus diagonal lines.
- I then resized the image to 200%, but displayed it on screen so that the line lengths matched the original size vertically/horizontally... the line widths and spacing are now two pixels (not one).
- I then resized the image to 400%, again displayed on screen so that the line lengths matched the original size vertically/horizontally... the line widths and spacing are now four pixels (not one).
The result: no 'more detail', just a reduced perception of 'stair stepping' (also called 'aliasing'). Also notice that the horizontal and vertical lines do not necessarily appear any 'sharper' with a higher pixel count for the final image!
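For anyone who wants to reproduce the test image, here is a rough sketch of how it can be built, assuming Python with the Pillow library (the canvas size and exact line positions are arbitrary choices, not the ones I originally used): it draws 1-pixel line pairs with 1-pixel gaps, then upscales by pixel replication (nearest-neighbour) to 200% and 400%.

```python
# Rough sketch of the line-pair test image, assuming Pillow is installed.
from PIL import Image, ImageDraw

W = H = 64
img = Image.new("L", (W, H), 255)   # 8-bit grayscale, white background
draw = ImageDraw.Draw(img)

# 1-pixel wide horizontal lines separated by 1 pixel of white space
for y in range(4, 16, 2):
    draw.line([(4, y), (28, y)], fill=0, width=1)

# 1-pixel wide vertical lines separated by 1 pixel of white space
for x in range(36, 48, 2):
    draw.line([(x, 4), (x, 28)], fill=0, width=1)

# diagonal lines, where the 'stair stepping' (aliasing) is most visible
for offset in range(0, 12, 2):
    draw.line([(4 + offset, 36), (24 + offset, 56)], fill=0, width=1)

# Pixel replication: each source pixel becomes a 2x2 or 4x4 block; no new detail is added.
img_200 = img.resize((W * 2, H * 2), Image.NEAREST)
img_400 = img.resize((W * 4, H * 4), Image.NEAREST)

img.save("lines_100.png")
img_200.save("lines_200.png")
img_400.save("lines_400.png")
```

Viewing all three files scaled to the same on-screen size shows the same effect described above: the diagonals look less 'stair stepped' at 200% and 400%, but nothing new is resolved.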