Contact


Contact details for Dr Harper:

bernieharper@gmail.com

Mission Statement: “This website aims to bring a unique body of photographic research into the public domain while forging new links between experimental psychology, neuroscience and real-world photographic practices.”
The animations referred to below come from visualisation research that can be found on the Cinematographers Mailing List (CML) website. They were created by its founder, the digital cinematographer and eminent technical author Geoff Boyle NFC FBKS.

The previsualisation animations demonstrate the extensive body-image distortion found in every 2D movie, television production and still photograph. For professional image makers, they confirm through mathematical visualisation and 3D modelling what the 2D experiments in the PhD research have measured perceptually. No photographer has been found who can see the original size and shape of any target once it has been distorted by 2D lenses. The logical conclusion from these studies is that we need to impose new standards for body image constancy and naturalness in filmmaking. This will ensure that startling distortions of size, shape, attractiveness (and motion depiction) are always a creative choice, rather than an unintended consequence of focal length, camera-to-subject distance and conventional 2D filmmaking.


Future-24 Film Process

A Proposal by Bernard Harper PhD

Currently, the look of traditional 24P movie-making on 35mm film is the most highly valued commercial aesthetic in the world. “Filmic” digital image making is so highly prized that all expensive TV productions must look as if they had been shot on movie film, to aesthetically distance them from low-cost television. However, 24P also has multiple visual artefacts that distort naturalness (see the perspective animations above) while constraining creativity, shooting costs and efficiency, and a range of necessary developments. Smooth motion perception, high-brightness cinema, and depth- and distortion-free naturalness are all currently impossible to deliver with conventional 24P movie making.
Worse still, because the standard 180° shutter exposes for only half of each frame period, for every minute of an actor’s performance it fails to record 30 seconds of it!
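The shutter-angle arithmetic can be checked with a short sketch (the function name is illustrative, not part of any film standard):

```python
# Sanity check of the shutter-angle arithmetic: a shutter open for
# angle/360 of each frame period records only that fraction of real time.

def unrecorded_seconds(shutter_angle_deg: float, seconds: float = 60.0) -> float:
    """Seconds per `seconds` of real time that the shutter never sees."""
    exposed_fraction = shutter_angle_deg / 360.0
    return seconds * (1.0 - exposed_fraction)

print(unrecorded_seconds(180.0))  # -> 30.0: half of every minute is lost
print(unrecorded_seconds(270.0))  # -> 15.0: a wider shutter loses less
```

Note the result is independent of frame rate: at any fps, a 180° shutter discards half of the performance.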
Current-generation High Frame Rate cameras can record every nuance of performance and motion. But they are incompatible with industry-standard single-lens projectors,* and have video-like artefacts rendering them “uncinematic”, with sometimes shockingly poor aesthetics. Future-24, however, can solve all of these problems while offering new opportunities for directors, creative teams and even the commercial dynamics of modern filmmaking.

The Future-24 concept proposes an enhanced movie-making process using a new type of Follow-Focus Matte Box incorporating a hidden sensor array. The F-24 Process would allow any film or digital movie camera to be used, giving precise follow focus while recording enough extra information to future-proof the production. This is because 24P can be enhanced in post using oversampled image and sound data captured by the F-24 follow-focus equipment.

Future-24 would deliver:
- Precision Follow-Focus at all apertures
- Subliminal Performance Capture in a single shot without body markers
- Automated Foreign-Language Lip-sync
- Subliminal Scene Depth Capture using the same “single shot” method
- Subliminal HFR motion and panning delivered without video-like artefacts
- New Standards for distortion-free Body Image Constancy, Naturalness and Colour Constancy
- Precision Sound Recording

For 3D versions, the 3D depth used for precision follow-focus at all apertures is recorded, allowing post-production conversions to incorporate the original 2D cinematography without expensive 2D-to-3D reprocessing.

Core Principle: The 24P Film Aesthetic is Paramount. The Future-24 Film Process will be backwards compatible with film or digital cameras, allowing 24P methods, and the primary image created within any film or digital camera, to be unaffected. F-24 aims to retain the highly prized filmic aesthetic while recording all of an actor’s performance. The additional data can be used to reduce perceptual distortions and record natural motion, size and shape constancy, and compatibility with future AV systems. It will capture enough additional temporal information to improve performance reproduction[1] and reduce judder in camera moves, while not increasing the underlying frame rate.
The goal is a movie or TV look that can be totally traditional, or from the same data set offer: Enhanced 2D, 3D, HFR, VFR, HDR, Multi-Spectral colour reproduction, Subliminal Performance Capture, Perfect Automatic Lip-Sync, undistorted body size rendering and Enhanced Sound Reproduction.

Method: A new type of matte box will allow precision follow-focus (at any aperture) but can also enable new types of shots and much more. The device will not interfere with standard practices and would contain an array of lenses (and microphones) to allow any 24P prime-lensed image to be enhanced naturally, aurally and subliminally.
The Enhanced Matte Box will also capture real depth and multi-spectral colours by oversampling, giving two unique benefits: interpolation-free slow motion and enough data to ensure that deep-fake misrepresentations can be proven to be distortions of the original scene. So much useful information will be captured on-camera that it will speed up filming, allowing digital prosthetic make-up, automatic foreign-language dubbing and sophisticated post-production VFX without precision body marking or green screening on the set. The time savings from these factors alone justify the development and use of the Future-24 process.

Performance Capture: F-24 would record facial expressions and depth at 96 fps and yet not interfere with the primary image. Statistical analysis of a markerless dot-cloud performance will allow the director to select the 24-frame subset that best conveys an actor’s facial performance. Dot-cloud “markers” will be invisible, allowing the raw footage to be graded normally. But if performance capture is required, the point data will be recoverable for digital make-up, complete replacement of the whole performance, or simply the lips for perfect foreign-language dubbing.

When 24 frames per second are not enough, additional sub-frames can be added to enhance the emotional content while being subliminally incorporated into the performance.  The resulting performance aesthetic would have the overall look of 24P but with a subliminal performance rate of 60P or higher. This manipulation is possible because the human visual system processes global and local motion in different areas of the brain.
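The 96 fps to 24P selection described above can be sketched as follows. This is a minimal illustration, assuming a per-frame “expression quality” score; the function name, the phase-based selection and the scoring are my assumptions, not a published F-24 specification:

```python
# Hypothetical sketch: picking the best 24P subset out of a 96 fps capture.
# At 96 -> 24 there are 4 candidate "phases" (every 4th frame, offset 0..3);
# we keep the phase whose frames score highest on some expression metric.

def best_24p_phase(scores, capture_fps=96, target_fps=24):
    """Return the phase offset whose every-nth-frame subset scores highest."""
    step = capture_fps // target_fps  # 4 when downsampling 96 fps to 24 fps
    def subset_total(phase):
        return sum(scores[i] for i in range(phase, len(scores), step))
    return max(range(step), key=subset_total)

# One second of capture: 96 per-frame scores where phase-1 frames score best.
scores = [(i % 4 == 1) * 1.0 for i in range(96)]
print(best_24p_phase(scores))  # -> 1
```

A real system would score frames on facial landmarks rather than a toy metric, but the selection principle is the same: the 24P cadence is preserved while the choice of which 24 frames to keep becomes an editorial decision.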

This allows subliminal manipulations in post-production. These interventions will enhance naturalness yet avoid the dreaded video look ridiculed by most practitioners and critics.

In combination with improved projection technology, camera moves without judder can be projected without the TV “soap opera” look so disliked by audiences. Yet when the classic TV aesthetic is required, precisely the same equipment and shooting methods can deliver it from the same footage. Subliminal motion capture will allow actors to work without markers, to replace lip motions with perfect animations automatically synced to any foreign language, or to allow the whole performance to be replaced if the actor is no longer suitable for the role.

Conclusion
The world’s best cinema projector manufacturers have conducted what may be the greatest confidence trick of the 21st Century.
They have convinced countless cineastes and TV-hating critics that they have been watching “films” for the last 20 years, when they have actually been watching the world’s largest digital TVs!* The problem for filmmakers is that whenever they try to advance the art and science of digital movies, projection systems not designed for the task will create obvious artefacts, such as poor brightness, distorted colour fidelity and ugly video-like motion that can devalue aesthetic appeal to the point of ridicule and disrepute. The video look can appal critics so much that they will say they are reviewing an HFR movie, when they are actually criticising projection-technology artefacts correctable at the projection stage.

IMO there is no possibility that cineastes will change their tastes, or that projector manufacturers will solve the problem on our behalf. The camera-side creatives and innovators will therefore take the blame for these disasters, so we must take responsibility for fixing the problems ourselves, using our own intellectual resources. In doing so, there is the possibility of new types of shot and production methods that can enhance all productions, regardless of how traditional or sophisticated the final aesthetic needs to be.

The Future 24 Film Process can be that solution.
In addition to native film or digital 24P 2D image capture, oversampling will capture enough information to ensure follow-focus that is pixel accurate while recording vast amounts of useful data. This can later be used to enhance 24P filmmaking and future-proof the filmmaker’s art.
It is also possible that Future-24 enhanced motion would only be seen in cinemas, which would be a commercial USP even traditionalists might think is useful!
If this process became standard, every “film” could be shot in HFR 3D, even when the final deliverable is a 2D TV version. Temporal, chromatic and spatial oversampling allows the resolution, frame rate, colour and depth of 2K 2D photography to be automatically “up-ressed” to the next level without visual artefacts. The concept is in fact one of the oldest in commercial cinema. It is about time 100-year-old thinking (perfected by Charlie Chaplin[2]) was brought back into the 21st Century, where it can once again enhance and future-proof traditional 24P filmmaking.

[1] Andy Serkis has complimented High Frame Rate filmmaking, describing it as “an actor’s medium.” And directors like Ang Lee refuse to shoot 24P films because 24P cannot record every detail of an actor’s performance or deliver the sophisticated image they want to see. Future-24 performances will be captured at very high frame rates. The data can then be used to ensure that subtle micro-expressions are subliminally windowed into an F-24 master, giving the naturalness of direct vision but with the filmic aesthetics valued by the industry.

[2] Charlie Chaplin became incensed by takes being lost to film breakages, scratches, dust and processing flaws during the making of his film “The Circus.” So he mounted two cameras close together and had two experienced operators hand-crank all of his shots. This made it highly unlikely he would ever suffer a simultaneous technical failure again. It also reduced the risk of his single negative being lost to fire or decay. Coincidentally, the additional camera also allowed his films to be watched in 3D for the first time, almost 50 years after his death.

Why 2D to 3D Conversion is a Disaster (and why we need Future-24)

Almost every converted film I have ever seen has horrific flaws so obvious that I find it astonishing they are not a major topic of discussion amongst cinematographers. Tiny differences between lenses can be discussed and debated ad infinitum, but horrifying distortions of a leading actor or actress in 3D conversions pass without comment anywhere. Anywhere, that is, except for the extract below from CML discussions and off-list comments after the original CML posts over four years ago.

The first flaw is the unnatural giantism and dwarfism seen in almost every converted 3D film. The mathematics of perspective and size constancy make it quite clear that objects must get smaller as they move away from us. But in a dynamic two-shot where actors are walking around each other (or one is moving away), the furthest actor clearly gets bigger in a converted movie. I have seen this many times in the last five years, and such shocking incompetence is still present in the largest-budget movies, such as The Last Jedi or the recent Godzilla movie.
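The perspective mathematics invoked here can be made concrete with a short sketch (the actor height and distances are illustrative values of my own):

```python
# The visual angle subtended by an object shrinks as distance grows,
# so a receding actor must get smaller on screen, never bigger.
import math

def angular_size_deg(object_height_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by an object at a given distance."""
    return math.degrees(2 * math.atan(object_height_m / (2 * distance_m)))

near = angular_size_deg(1.8, 2.0)  # 1.8 m actor, 2 m from camera
far = angular_size_deg(1.8, 4.0)   # same actor at 4 m
assert far < near                  # geometry forbids the "growing" actor
```

Any conversion in which the more distant actor appears larger is therefore violating the geometry of the original photography, not merely a matter of taste.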

These scenes usually include over-the-shoulder shot dialogue. So as well as the bizarre sizing issue, we also see mismatched eye lines. Actors are supposed to look into each other’s eyes when they talk. But conversion makes it far more likely that the comically oversized background head will appear to be reading dialogue from a board way to the left or right of where they should be. But close one eye and the illusion disappears as the original eye lines are restored. I have never seen this effect in 2D or native 3D, and I suspect it may be a flaw only possible in converted 3D movies.

The last artefact is perhaps the most astonishing. In the very early days of anamorphic lenses, cinematographers and actors were horrified by an effect called “anamorphic mumps.” This is where the faces of the most beautiful people in the world were randomly distorted or fattened by mismatches between the prime anamorphic and its earliest projection lenses. The only solution at the time was to avoid close-ups and frame the talent way off-centre. Otherwise, there was a serious risk of actresses becoming incandescent with rage and of cinematographers inadvertently causing very expensive reshoots.

Well now we have 3D conversion “mumps.” Those who saw the last Star Wars conversion would see the disturbingly rubberized face of Daisy Ridley continually distorted. It has a different shape in every close-up! Really!! Closing one eye restores her face back to how it looked in 2D (before it was butchered). Elizabeth Olsen’s entire head was distorted into the shape of an American football (lengthwise!!!) in most of her close-ups in her Godzilla film. As with Daisy Ridley, binocular viewing of it will confirm the world is going mad, and that someone is taking the piss by asking people to pay for a premium ticket to see distortions that were eradicated from 2D movies over 50 years ago. Otherwise, converted movies are just fab.
