Putting human vision models into computer video display

Image processing technology has achieved remarkable breakthroughs: more vivid colors, richer detail, and higher-definition images. This adds up to better resolution and a broader range of available colors at a lower cost per pixel. But despite these stunning advances in visual display, it has remained impossible to accurately reproduce what the human eye would see when viewing a scene directly.

No matter how advanced the technology, there has always been a difference between seeing something on the screen and seeing it in real life. The human eye has an advantage in perceiving input, due to its ability to compensate on the fly for differences in lighting conditions, in both static and mobile viewing.

There's no doubt that the future of television and video display rests in higher definition. Most recently, 4K TV, also known as Ultra HD, offers dramatic improvements with four times the pixel count of a standard 1080p full HD television (twice the resolution in each dimension).

What's next though, isn't just adding more pixels to the display and supporting larger color gamuts. The most dramatic improvement is in an entirely different approach that begins with a study of how the human eye organically perceives and processes color.

The human eye isn't just RGB

The original color standards defined a limited range of colors by mixing different intensities of red, green and blue (RGB) light emitted from rare-earth phosphors grouped into sets of three. This system has persisted over time, but it cannot reproduce all visible colors: matching some of them would require a negative amount of one primary, which no physical emitter can produce.
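The gamut limitation can be seen with a little arithmetic. The sketch below converts a saturated spectral green from CIE XYZ coordinates to linear sRGB using the standard conversion matrix; the chromaticity values are approximate and chosen purely for illustration. A negative result means the color cannot be formed by any non-negative mix of the display's primaries.

```python
# Sketch: why a display's RGB primaries cannot reproduce every visible color.
# A negative sRGB component means the color lies outside the sRGB gamut: the
# display would need to emit a "negative" amount of that primary, which is
# physically impossible.

def xyz_to_linear_srgb(X, Y, Z):
    """Standard CIE XYZ -> linear sRGB matrix (IEC 61966-2-1)."""
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

# Approximate chromaticity of a monochromatic green near 520 nm
x, y = 0.10, 0.80
Y = 1.0                     # relative luminance
X = x / y * Y
Z = (1 - x - y) / y * Y

r, g, b = xyz_to_linear_srgb(X, Y, Z)
print(r, g, b)  # r and b come out negative: out of gamut
```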

Nonetheless, it worked well, and has been extended a number of times. The most common standard continues to be sRGB, although some new color emitters in display devices are capable of creating more colors than are defined by the standard.

It's also important to note that the move from analog to digital displays came at a cost. In the real world, human eyes are not digital (unless you are a character from Star Trek). The natural color spectrum is analog, and every color in the frequency range of visible light is possible.

Digital displays impose an artificial limit on the color gamut because they must rely on discrete digital values. They also treat the entire screen as a single unit, applying only crude, across-the-board adjustments of brightness, which can make some colors look simply "wrong" in certain lighting environments.

The human eye adjusts how it sees colors based on brightness and the color of the viewing light. Technological displays, unlike the eye, do not differentiate between regions that should be adjusted (such as shadows) and those that should not.

Digital standards also fail to take ambient light into account; as a result, a display in a brightly lit environment will look less colorful than it would in a dimly lit theater. The human eye does something that technology has until now been unable to do: it adjusts its perception of colors based on the level of ambient light.
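As an illustration of ambient-light compensation in general, a display pipeline with a light sensor might lift shadow values as ambient illuminance rises. The mapping below is a toy model chosen for illustration, not Entertainment Experience's algorithm.

```python
import math

def compensate(value, ambient_lux):
    """Lift shadow detail as ambient light increases.

    value: a normalized channel value in [0, 1].
    ambient_lux: ambient illuminance from a light sensor.
    A simple gamma reduction; a toy model, not eeColor's method.
    """
    # Gamma eases from 1.0 (dark room, no change) toward 0.6 (bright sun),
    # lifting shadow values so they stay distinguishable under glare.
    gamma = max(0.6, 1.0 - 0.1 * math.log10(max(ambient_lux, 1.0)))
    return value ** gamma

print(compensate(0.1, 1))      # dark room: unchanged, 0.1
print(compensate(0.1, 10000))  # bright sunlight: lifted to ~0.25
```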

Putting human vision technology onto the digital screen

Applying the physical models of human vision to the computer or television display will come closer to natural vision than any other image technology on the market. This new era of real-time color processing, first developed by Entertainment Experience for its eeColor software application, in partnership with Rochester Institute of Technology, is now a reality. The new model displays a vibrancy that has never before been possible, even in Ultra HD.

The technology uses real-time light sensors to automatically restore any quality that would otherwise be lost to subpar lighting or bright sunlight, making it the first display technology capable of equally vibrant output in any lighting environment.

In addition to dynamically adjusting for light, the new processing technology uses multidimensional lookup tables to map the true image colors as seen by the human eye. This gives total color control for the first time.
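A multidimensional table of this kind is commonly called a 3D LUT: every input color indexes into a lattice of precomputed output colors, with interpolation between neighboring lattice points. The sketch below shows the general mechanism, assuming trilinear interpolation and a simple identity table; the lattice size and contents are illustrative, not eeColor's actual tables.

```python
# Sketch of the 3D-LUT approach: each (R, G, B) input is mapped through a
# lattice of precomputed output colors, blending the eight surrounding
# lattice points with trilinear interpolation.

def apply_3d_lut(rgb, lut, n):
    """rgb: (r, g, b), each in [0, 1]; lut[i][j][k] -> output (r, g, b);
    n: number of lattice points per axis."""
    def axis(v):
        t = v * (n - 1)
        i = min(int(t), n - 2)   # lower lattice index, fraction within cell
        return i, t - i
    (ri, rf), (gi, gf), (bi, bf) = axis(rgb[0]), axis(rgb[1]), axis(rgb[2])
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                entry = lut[ri + dr][gi + dg][bi + db]
                for c in range(3):
                    out[c] += w * entry[c]
    return tuple(out)

# Identity LUT: maps each color to itself. A real table would encode the
# desired color correction instead.
n = 5
identity = [[[(i / (n - 1), j / (n - 1), k / (n - 1)) for k in range(n)]
             for j in range(n)] for i in range(n)]

print(apply_3d_lut((0.3, 0.6, 0.9), identity, n))  # ≈ (0.3, 0.6, 0.9)
```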

The evolution of color reproduction

The problem with video display is that it fundamentally differs from how the brain works. The way digital display works hasn't changed much since the early days of TV. We still have three primary colors, with white and other additions bolted on to make the standard look better. The trouble is, the result looks awful when the relatively poor processing capability of the video processor is blown up onto a big screen.

But no matter what you do to the display, the color information in the image data doesn't correspond to the way your brain expects color to work; you just get a better-looking bad picture. The eye, by contrast, is naturally adaptive. That's why you can see a candle on a dark night from ten miles away. With the way conventional color image processing works, that candle wouldn't be visible at all.

To achieve a video display that offers the same quality as the human eye, one must start by understanding what science calls memory colors. Examples include blue sky, skin tones, and other reference colors.

If you can make those colors look right, the human visual system, when it looks at the display, will perceive a color quality much closer to what is actually there. To do that, you have to understand how to map the image into different regions of the color space.

In current technology, adapting the display depends on adjusting saturation and contrast to compensate. The new Entertainment Experience technology does not depend on those adjustments. Instead, it controls the region of the display's color gamut that produces the color the eye expects to see, while correcting for ambient light.

Furthermore, the approach is scalable: it adapts to displays of any size and resolution. Scaled down to a smartphone, the display characteristics can be adjusted so the screen remains clearly visible even in bright sunlight.

Unfortunately, before eeColor, standards had not kept up with the promise of the hardware. The color gamut for HDTV was bound by what could be created from a scanning electron beam and phosphor.

Today we have wavelength-tunable lasers that can create a larger color gamut. It is possible to reach about 85% of the theoretical maximum gamut with current LED and laser technology, but the standard remains at about 45%.
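The scale of the gap can be estimated with a rough triangle-area comparison in CIE 1931 xy chromaticity space, comparing the HDTV (sRGB/Rec.709) primaries with the Rec.2020 primaries, which lie on the spectral locus and are realizable with monochromatic laser emitters. Area ratios in xy are only a crude proxy for perceived gamut, but they illustrate what is being given up:

```python
# Rough gamut comparison via triangle areas in CIE 1931 xy chromaticity
# space. Primary coordinates are the published standard values.

def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in the xy plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]        # R, G, B
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

a_srgb = triangle_area(*SRGB)
a_2020 = triangle_area(*REC2020)
print(f"sRGB covers {a_srgb / a_2020:.0%} of the Rec.2020 area")  # 53%
```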

We throw away a lot of color information in going from what the camera saw to what you see on the display. eeColor puts back much of the information that was thrown away, by dynamically processing what was retained.

The eeColor technology is a software plug-in. The engine references a set of lookup tables that can be very specific or completely generic to a class of device, which gives it a high degree of flexibility to be adapted to almost any display hardware.
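One way such a table-selection engine could work, sketched here with entirely hypothetical device IDs and table names, is a most-specific-first lookup:

```python
# Toy sketch of a plug-in engine choosing the most specific lookup table
# available: an exact table for a known panel, a generic table for a class
# of device, or a conservative default. All IDs here are hypothetical.

TABLES = {
    "acme-panel-x1": "LUT tuned for this exact panel",
    "class:oled": "generic LUT for OLED displays",
    "default": "conservative fallback LUT",
}

def select_table(device_id, device_class):
    """Return the first table found, from most to least specific."""
    for key in (device_id, "class:" + device_class, "default"):
        if key in TABLES:
            return TABLES[key]

print(select_table("acme-panel-x1", "oled"))   # exact match
print(select_table("unknown-panel", "oled"))   # falls back to class table
print(select_table("unknown-panel", "lcd"))    # falls back to default
```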

YouTube : https://www.youtube.com/watch?v=rmGhq-jPn5c

The result is simply stunning – and represents the future of display technology on every device from smart phone displays to Times Square billboards.

  • John Parkinson is Chief Executive Officer of Entertainment Experience LLC.

Posted on March 31, 2015 in TechRadar
