In essence, this is the study of the way watching films affects the human mind. The term was coined in a 2008 paper by Uri Hasson, which showed that some films exert considerable control over viewers' brain activity and eye movements, though this depends, as one would expect, on their content, editing, and directing style. The paper suggested that such work, based on functional magnetic resonance imaging, could lead to a fusion of film studies and cognitive neuroscience, and proposed the name neurocinematics for the new field.
In February 2010, research by Professor James Cutting and his team at Cornell University was widely reported. They measured the length of every shot in 150 high-grossing Hollywood films released between 1935 and 2005 and found that the more recent the film, the more closely its pattern of shot durations matched the natural rhythm of human attention. That pattern, known as the 1/f rule or pink-noise rule, had been deduced in earlier studies of volunteers performing attention tasks. It seems that, through experience, film editors have intuited the formula.
The term has appeared a number of times online but only rarely in print. As yet, it’s a niche formation and may not survive.
Such results have given rise to the term neurocinematics, which measures the level of experiential control that popular media have on people’s brains.
The National, 18 Jan. 2009.
Given the gargantuan cost of blockbusters like Avatar, it wouldn’t be surprising if Hollywood’s next step is to use brain scanners to get inside the heads of movie-goers. It’s impossible to translate brain activity into ‘Oscar buzz’, though, so the potential of ‘neurocinematics’ is unproven.
New Scientist, 20 Feb. 2010.