Introduction

“No matter how technically advanced a 2D camera is, it will always deliver a flattened and fattened image of the human face and form.” *

Body Image Distortion in Photography: How 3D Reveals What 2D Distorts

This website reports the aims, methods and conclusions of eight 2D and five 3D experiments conducted by Dr. Bernard Harper between 1999 and 2015. He investigated the dramatic flattening and fattening effects in vernacular and professional 2D photography,* discovering that monocular perception always distorts face and body size estimations.
This effect is highly significant, with an innate gender bias that always fattens females to an excessive and often disturbing degree. Strong claims are made about the inability of 2D photography to accurately record the true size, shape and attractiveness of the people it depicts.

These effects constitute a new type of photographic aberration.

Claims like those made here require exceptional supporting evidence. The following sections present extensive evidence from calibrated methodologies, applied to images of real people and inanimate objects, with clear explanations of why body image distortion (BID) occurs in 2D and how it can be reduced in common practice.

This stereoscopic pair shows the calibration images projected life-size in every 3D BID experiment. They are reproduced here as a “free-view” stereo pair, allowing a true 3D image to be seen on any high-resolution 2D screen. The left eye’s image is on the right side of the screen for cross-eyed stereo viewing: focusing on the central black border while slightly crossing your eyes should enable most people to fuse a true 3D image. The 3D image should appear centrally between the two 2D photos and, with corrected vision, be sharp. Moving back from the screen will help first-time viewers to see a miniaturized version of how the pair looked when projected life-size in 3D. Even at this reduced scale, it should be obvious that the model looks slimmer and more natural than in either of the two 2D photos that form the ortho-stereoscopic image.
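The cross-view layout described above (the left eye’s image placed on the right half of the frame) can be sketched in code. This is only a toy illustration using nested lists as images, not the method used to produce the pair shown here; a real implementation would use an image library such as Pillow.

```python
def cross_view_pair(left_img, right_img, border_cols=1, black=0):
    """Compose a cross-eyed free-view stereo pair.

    For cross-eyed viewing, the LEFT eye's image goes on the RIGHT half
    of the frame, with a black border column between the two halves.
    Images here are toy grids (lists of pixel rows) of equal size.
    """
    assert len(left_img) == len(right_img), "images must have equal height"
    composed = []
    for left_row, right_row in zip(left_img, right_img):
        # Right-eye image on the left, left-eye image on the right.
        composed.append(right_row + [black] * border_cols + left_row)
    return composed

# Two tiny 2x2 "images": the halves are swapped for cross viewing.
pair = cross_view_pair([[1, 1], [1, 1]], [[2, 2], [2, 2]])
```

The central black column plays the same role as the border described above: it gives the viewer a fixation target on which to cross their eyes.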

This GIF is an animation of the first stereo pair, for people who cannot free-view 3D. It is important to note that when the image was projected life-size, the background and foreground scaling and distances were reproduced with millimetre accuracy. With the camera position, background and lighting precisely reproduced (and the actual model returned to her pose), her adjacent projected 3D image was an almost perfect virtual twin of the real person.*

Life-size 3D projection allowed every participant and model to see that orthostereoscopic imaging was an uncannily accurate method for reproducing virtual humans.

Convergent orthostereoscopic image capture and projection also allowed us to investigate whether the size reductions seen in the feasibility studies were due to unwanted side-effects, such as micropsia or macropsia. Small increases and decreases in size were seen when the projectors were slightly converged or diverged from the ideal alignment, but these conditions were uncomfortable and induced ghosting, both of which were completely eradicated when the convergence was realigned at the point of focus (for the original cameras) and at the gaze point of the test subjects. When realigned, all virtual targets had exactly the same size and shape as their real “twins” and could be viewed for unlimited periods without discomfort.

3D Always Delivers a Slimmer and More Natural Image

Below is an animation of a CGI peanut shape. It was projected at human torso size in a series of non-human versions of the first 3D experiments, using the same projection method as the 3D virtual humans above. The waist visibly narrows when viewed in 3D because we see more of the background behind it than we can in 2D. The slimming effect would be even larger for a face and neck viewed at conversational distances, rather than at the 1.6-metre distance shown here.
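The occlusion geometry behind this can be sketched with a simple model (the numbers below are my own illustrative assumptions, not the experiment’s data): treat the waist as a flat occluder of width w at distance d, with a background plane at distance D. Each eye casts a “shadow” of the occluder onto the background; only the region hidden from both eyes is truly invisible, and that intersection is narrower than either single eye’s shadow.

```python
def hidden_background(width, obj_dist, bg_dist, ipd=0.065):
    """Width of background hidden from one eye vs. from both eyes.

    Simplified model: a flat occluder of `width` metres at `obj_dist`
    metres, a background plane at `bg_dist` metres, and eyes (or camera
    lenses) `ipd` metres apart. All values are illustrative assumptions.
    Returns (monocular_hidden, binocular_hidden) in metres.
    """
    k = bg_dist / obj_dist          # projection ratio onto the background
    monocular = width * k           # shadow cast from a single viewpoint
    # The two eyes' shadows are offset from each other by ipd * (k - 1);
    # only their intersection is hidden from BOTH eyes.
    binocular = max(0.0, monocular - ipd * (k - 1))
    return monocular, binocular

# A 0.30 m "waist" at the 1.6 m viewing distance, background 0.8 m behind:
mono, bino = hidden_background(0.30, 1.6, 2.4)
```

With these numbers, roughly 7% more background is visible binocularly than monocularly, and the advantage grows linearly with the lens separation, which is consistent with the report below that wider stereoscopic baselines are judged progressively slimmer.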

This CGI animation shows the “peanut” shape in transition from its average size rendered in 2D to its perceived shape in 3D. It is always perceived as slimmer in 3D, and is judged progressively slimmer as wider stereoscopic disparities (greater distances between lenses) are projected. These 3D images were also ortho-stereoscopically projected at life size using linear polarization. For most subjects, it was the first time they had ever seen a 3D image.

The life-size 3D projections gave participants a strong impression that they were in the presence of a real person, or of a startlingly realistic waxwork sculpture. The CGI shapes also looked “real,” and both types of 3D image were always perceived as significantly slimmer than in otherwise identical 2D projections. This slimming effect was so immediately apparent that the experimental protocols mixed synoptically projected 2D images and varying-IPD 3D images in a randomised order, so that viewers never saw the same person twice, or the same 2D or 3D condition shown consecutively, and always had to make a fresh judgement from a different condition.

Remarkably, viewers were very consistent in their size estimations despite these interventions. This strongly indicated that the effect was not an artifact of stereoscopic presentation order, but a consistent percept of a highly predictable and repeatable phenomenon. The experiments were run by psychology master’s students, who were invited to critique the methods and improve them where necessary.
However, no matter how the experiments were run, the slimming effect of binocular disparity was always present and always statistically significant.

For more information regarding the 3D experiments and the 3D technology used, contact:
bernieharper@gmail.com

* The quotes above come from the author when presenting the BID research website for the first time at a 3D filmmaking event at the British Film Institute. This website presents the same research for media professionals, photographers, filmmakers, or anyone wanting to understand how professional-quality 2D imaging can have pervasive flattening and fattening effects. These 2D effects will inevitably lead to significant perceptual distortions of size, shape, texture and even colour. The 35mm stereography used Fuji 100 Daylight Transparency Film, which was remarkably accurate at reproducing the original skin tones, colours and key tonal ranges of the models, their clothing and the background. Other methods have lower resolution and may have poorer colour fidelity. All of the original photography, research, experiments and data reported are the copyright of the author and must not be reproduced in whole or in part without the explicit written permission of Bernard Harper PhD. The CGI images were created by Phil Berridge of 2020 Design. All other images are used for educational purposes.

Questionnaire: Next >
