Reposted from: http://www.sjbrown.co.uk/2004/05/14/gamma-correct-rendering/

With consumer-level hardware now capable of rendering high dynamic range image data, the days of the 8-bit sRGB framebuffer are numbered. Programmers of next-generation graphics devices are able to model lighting systems to high accuracy, then tone-map these values into a displayable range for conventional 8-bit sRGB equipment, such as PC monitors.

The graphics pipeline from source art to final output is complicated, and requires the programmer to work in several different colour spaces along the way. In this article I’ll give a brief overview of colour spaces, and then detail a commonly overlooked area in the texture pipeline where gamma is important.

The sRGB Standard

The sRGB colour space is based on the monitor characteristics expected in a dimly lit office, and has been standardised by the IEC (as IEC 61966-2-1). This colour space has been widely adopted by the industry, and is used universally for CRT, LCD and projector displays. Modern 8-bit image file formats (such as JPEG 2000 or PNG) default to the sRGB colour space.

A value in the sRGB colour space is a floating-point triple, with each value between 0.0 and 1.0. Values outside of this range are clipped. An sRGB colour from this [0, 1] interval is commonly encoded as an 8-bit unsigned integer between 0 and 255.

The pivotal fact to remember about sRGB is that it is non-linear. It roughly follows the curve y = x^2.2, although the actual standard curve is slightly more complicated (and is listed at the end of this article). A graph of sRGB against gamma 2.2 looks as follows:

[Figure: a comparison of sRGB and the gamma 2.2 curve.]

This mapping has the nice property that more resolution is given to low-luminance RGB values, which fits the human visual model well.

The Gamma Function As An Approximation

As can be seen by the above graph, the sRGB standard is very close to the gamma 2.2 curve. For this reason, the full sRGB conversion function is often approximated with the much simpler gamma function.

Please note that the value associated with the word gamma is the power p used in the function y = x^p. Unfortunately gamma is often conflated with brightness, which is not quite what it controls: the full [0, 1] interval is always mapped back onto the full [0, 1] interval, so only the distribution of values in between changes.
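As a small illustration (a sketch, not part of the original article), a pure power curve fixes the endpoints of [0, 1] while redistributing the midtones:

```python
# Pure gamma curves: decode (display -> linear) raises to the power,
# encode (linear -> display) raises to the reciprocal power.
GAMMA = 2.2

def gamma_decode(x: float, gamma: float = GAMMA) -> float:
    """Map a display-referred value in [0, 1] to linear light."""
    return x ** gamma

def gamma_encode(x: float, gamma: float = GAMMA) -> float:
    """Map a linear-light value in [0, 1] back to display space."""
    return x ** (1.0 / gamma)

# Endpoints are fixed, so gamma is not a brightness offset:
print(gamma_decode(0.0), gamma_decode(1.0))  # 0.0 1.0
# Midtones shift: display-space 0.5 is only ~21.8% linear light.
print(round(gamma_decode(0.5), 3))
```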

What Maths Work In This Colour Space?

In general your lighting pipeline should be done in linear space, so that all lighting is accumulated linearly. This is the approach taken in many next-generation engines, and is the only way to ensure that you are being physically correct.

However, assuming that the gamma function approximation is good enough, you can still perform modulate operations. In this case we have some constant A that we wish to modulate our sRGB source data x by, storing the result in sRGB as y. In linear space this would be written as:

y^2.2 = A x^2.2 = (A^(1/2.2) x)^2.2

Since we are working only in the [0, 1] interval, we can remove the power from both sides and work in the sRGB space itself. In which case:

y = A^(1/2.2) x

So if we convert our constants into sRGB, then modulate operations can still be performed. However, very few operations survive this trick. Additive operations (used in additive lighting models, or for alpha-blending) cannot be reformulated to work in a gamma 2.2 space, simply because the space is non-linear. If you wish to have a correct additive lighting model, you have to work in a linear space, which means you need a higher-precision framebuffer to at least match the low-luminance granularity of sRGB.
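The modulate identity above is easy to check numerically. This is a sketch under the pure gamma-2.2 approximation; the constant and colour values are arbitrary:

```python
GAMMA = 2.2

def modulate_via_linear(a: float, x_srgb: float) -> float:
    """Reference path: decode to linear, modulate, re-encode."""
    y_linear = a * (x_srgb ** GAMMA)
    return y_linear ** (1.0 / GAMMA)

def modulate_in_srgb(a: float, x_srgb: float) -> float:
    """Shortcut: pre-convert the constant A, then modulate in sRGB space."""
    return (a ** (1.0 / GAMMA)) * x_srgb

a, x = 0.5, 0.8
print(modulate_via_linear(a, x), modulate_in_srgb(a, x))  # identical up to rounding
```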

An sRGB Example: Mip-Mapping

If you bilinearly filter an image in the sRGB colour space, you always end up with a filtered colour that is darker than the correct result. The amount of error grows as the range of the input colours grows. Here’s a worst-case example of a black-and-white grid filtered with and without colour space conversion:

[Figure: 50% linear (i.e. respecting gamma) vs black/white alternating pixels vs 50% sRGB (i.e. ignoring gamma).]

The center image contains alternating black and white pixels. The left and right images were generated by down-sampling the image and then up-scaling back to the original size. The left image was down-sampled in linear space, the right image was down-sampled in sRGB directly.

On a correctly-calibrated monitor in standard lighting conditions, the left-hand image and the center image should appear the same overall brightness. This is because the linear-space average was 50% grey, which gets mapped to a value of 186 in sRGB. The right-hand image contains the sRGB value of 128, but this is only a 21.4% grey in linear space, so should appear much darker.

Most game textures do not have anywhere near as high luminance variation, and as such the errors introduced by filtering sRGB colours directly are nowhere near as harsh as in this example. However, any high-variation data (such as precomputed lightmaps) should be filtered as linear light values before each mip level gets saved in sRGB.
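The checkerboard example above can be reproduced numerically. This sketch uses the pure gamma-2.2 approximation rather than the exact sRGB curve (the exact curve would give 188 rather than 186 for the 50% grey):

```python
GAMMA = 2.2  # pure-power approximation to the sRGB curve

def srgb8_to_linear(v: int) -> float:
    return (v / 255.0) ** GAMMA

def linear_to_srgb8(x: float) -> int:
    return round(255.0 * x ** (1.0 / GAMMA))

pixels = [0, 255, 0, 255]  # alternating black and white

# Correct: average in linear light, then re-encode.
linear_avg = sum(srgb8_to_linear(p) for p in pixels) / len(pixels)
print(linear_to_srgb8(linear_avg))        # 186

# Incorrect: average the sRGB codes directly.
print(round(sum(pixels) / len(pixels)))   # 128
```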

Conclusion

The current generation of graphics hardware can move data from sRGB into linear space during a texture read instruction, and assuming that you make use of higher-precision frame buffers, it is cheap to transform a final render back into sRGB for the display device. In tools, correctness matters more than speed, so the extra conversion time needed to move between colour spaces can be ignored.

Since the next generation of rendering engines will be expected to get all the subtleties of complex lighting equations correctly simulated, it is essential to be colour-space aware throughout your art pipeline and rendering system. Hopefully this article has highlighted areas where colour spaces are important – the next section contains some of the conversion equations that I’ve used in both art tools and rendering code.

Useful Data

sRGB to linear RGB: rgb (sRGB), RGB (linear RGB)

R = r / 12.92                     for r <= 0.04045
R = ((r + 0.055) / 1.055)^2.4     for r > 0.04045

G = g / 12.92                     for g <= 0.04045
G = ((g + 0.055) / 1.055)^2.4     for g > 0.04045

B = b / 12.92                     for b <= 0.04045
B = ((b + 0.055) / 1.055)^2.4     for b > 0.04045

This is commonly approximated as X = x^2.2 for all channels.
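The piecewise curve above translates directly to code (a per-channel sketch; the input is a float in [0, 1]):

```python
def srgb_to_linear(c: float) -> float:
    """Exact sRGB -> linear conversion for one channel in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-grey in sRGB is only ~21.4% linear light.
print(round(srgb_to_linear(0.5), 4))
```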

linear RGB to sRGB: RGB (linear RGB), rgb (sRGB)

r = 12.92 R                       for R <= 0.0031308
r = 1.055 R^(1/2.4) - 0.055       for R > 0.0031308

g = 12.92 G                       for G <= 0.0031308
g = 1.055 G^(1/2.4) - 0.055       for G > 0.0031308

b = 12.92 B                       for B <= 0.0031308
b = 1.055 B^(1/2.4) - 0.055       for B > 0.0031308

This is commonly approximated as x = X^(1/2.2) for all channels.
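And the inverse direction in code (again a per-channel sketch; the input is linear, in [0, 1]):

```python
def linear_to_srgb(c: float) -> float:
    """Exact linear -> sRGB conversion for one channel in [0, 1]."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

# 50% linear light encodes to ~0.735 in sRGB (186 as an 8-bit value is
# the gamma-2.2 approximation; the exact curve gives ~188).
print(round(linear_to_srgb(0.5), 4))
```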

XYZ to linear RGB: [D65 white point]

R = 3.2406 X - 1.5372 Y - 0.4986 Z
G = -0.9689 X + 1.8758 Y + 0.0416 Z
B = 0.0557 X - 0.2040 Y + 1.0570 Z

linear RGB to XYZ: [D65 white point]

X = 0.4124 R + 0.3576 G + 0.1805 B
Y = 0.2126 R + 0.7152 G + 0.0722 B
Z = 0.0193 R + 0.1192 G + 0.9505 B
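Both matrices above can be wrapped in small helpers (a sketch; the coefficients are the four-decimal D65 matrices listed above, so a round trip is only accurate to roughly that precision):

```python
def linear_rgb_to_xyz(r: float, g: float, b: float):
    """Linear RGB -> CIE XYZ, D65 white point."""
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

def xyz_to_linear_rgb(x: float, y: float, z: float):
    """CIE XYZ -> linear RGB, D65 white point."""
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0416 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

# White (1, 1, 1) maps to the D65 white point (X=0.9505, Y=1.0, Z=1.089)
# and round-trips back to approximately (1, 1, 1).
print(linear_rgb_to_xyz(1.0, 1.0, 1.0))
```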

Written by Simon Brown

May 14th, 2004 at 8:00 pm

Posted in Rendering
