Table of contents
- What is gamma and why do we need it?
- Effects of gamma-incorrectness
- References & further reading
A short quiz
If you have ever written, or are planning to write, any kind of code that deals with image processing, you should complete the quiz below. If you have answered one or more questions with a yes, there’s a high chance that your code is doing the wrong thing and will produce incorrect results. This might not be immediately obvious to you because these issues can be subtle and they’re easier to spot in some problem domains than in others.
So here’s the quiz:
- I don’t know what gamma correction is (duh!)
- Gamma is a relic from the CRT display era; now that almost everyone uses LCDs, it’s safe to ignore it.
- Gamma is only relevant for graphics professionals working in the print industry where accurate colour reproduction is of great importance—for general image processing, it’s safe to ignore it.
- I’m a game developer, I don’t need to know about gamma.
- The graphics libraries of my operating system handle gamma correctly.¹
- The popular graphics library <insert name here> I’m using handles gamma correctly.
- Pixels with RGB values of (128, 128, 128) emit about half as much light as pixels with RGB values of (255, 255, 255).
- It is okay to just load pixel data from a popular image format (JPEG, PNG, GIF etc.) into a buffer using some random library and run image processing algorithms on the raw data directly.
Don’t feel bad if you have answered most with a yes! I would have given a yes to most of these questions a week ago myself too. Somehow, the topic of gamma is just under most computer users’ radar (including programmers writing commercial graphics software!), to the extent that most graphics libraries, image viewers, photo editors and drawing software of today still don’t get gamma right and produce incorrect results.
So keep on reading, and by the end of this article you’ll be more knowledgeable about gamma than the vast majority of programmers!
The arcane art of gamma-correctness
Given that vision is arguably the most important sensory input channel for human-computer interaction, it is quite surprising that gamma correction is one of the least talked about subjects among programmers, and it’s mentioned in technical literature rather infrequently, including computer graphics texts. The fact that most computer graphics textbooks don’t explicitly mention the importance of correct gamma handling, or discuss it in practical terms, does not help matters at all (my CG textbook from uni falls squarely into this category, I’ve just checked). Some books mention gamma correction in passing in somewhat vague and abstract terms, but then provide neither concrete real-world examples on how to do it properly, nor explain what the implications of not doing it properly are, nor show image examples of incorrect gamma handling.
I came across the need for correct gamma handling while writing my raytracer, and I had to admit that my understanding of the topic was rather superficial and incomplete. So I spent a few days reading up on it online, but it turned out that many articles about gamma are not much help either: many of them are too abstract and confusing, some contain too many interesting but otherwise irrelevant details, and some others lack image examples or are simply incorrect or hard to understand. Gamma is not a terribly difficult concept to begin with, but for some mysterious reason it’s not that trivial to find articles on it that are correct, complete and explain the topic in clear language.
What is gamma and why do we need it?
Alright, so this is my attempt to offer a comprehensive explanation of gamma, focusing just on the most important aspects and assuming no prior knowledge of it.
The image examples in this article assume that you are viewing this web page in a modern browser on a computer monitor (CRT or LCD, doesn’t matter). Tablets and phones are generally quite inaccurate compared to monitors, so try to avoid those. You should be viewing the images in a dimly lit room, so no direct lights or flare on your screen please.
Light emission vs perceptual brightness
Believe it or not, the difference in light energy emission between any two neighbouring vertical bars in the image below is a constant. In other words, the amount of light energy emitted by your screen increases by a constant amount from bar to bar, left to right.
Now consider the following image:
On which image does the gradation appear more even? It’s the second one! But why is that so? We have just established that in the first image the bars are evenly (linearly) spaced in terms of emitted light intensity between the darkest black and brightest white your monitor is capable of reproducing. But why don’t we see that as a nice even gradation from black to white then? And what is being displayed on the second image that we perceive as a linear gradation?
The answer lies in the response of the human eye to light intensity, which is non-linear. On the first image, the difference between the nominal light intensity of any two neighbouring bars is constant:
$$Δ_{linear} = I_n-I_{n-1}$$
On the second image, however, this difference is not constant but changes from bar to bar; it follows a power law relationship, to be exact. All human sensory perception follows a similar power law relationship in terms of the magnitude of stimulus and its perceived intensity.
Because of this, we say that there is a power law relationship between nominal physical light intensity and perceptual brightness.
Physical vs perceptual linearity
Let’s say we wanted to store a representation of the following real-world object as an image file on the computer (let’s pretend for a moment that perfect greyscale gradients exist in the real world, okay?) Here’s what the “real world object” looks like:
Now, let’s pretend that we can only store 5-bit greyscale images on this particular computer system, which gives us 32 distinct shades of grey ranging from absolute black to absolute white. Also, on this computer, greyscale values are proportional to their corresponding physical light intensities, which will result in a 32-element greyscale as shown in Figure 1. We can say that this greyscale is linear in terms of light emission between successive values.
If we encoded our smooth gradient using only these 32 grey values, we would get something like this (let’s just ignore dither for now to keep things simple):
Well, the transitions are rather abrupt, especially on the left side, because we only had 32 grey values to work with. If we squint a little, it’s easy to convince ourselves that this is a more or less “accurate” representation of the smooth gradient, as far as our limited bit-depth allows it. But note how the steps are much larger on the left side than on the right—this is because we are using a greyscale that is linear in terms of emitted light intensity, but as we have mentioned before, our eyes don’t perceive light intensity in a linear way!
This observation has some interesting implications. The error between the original and the 5-bit encoded version is uneven across the image; it’s much larger for dark values than for light ones. In other words, we are losing representational precision for dark values and are using relatively too much precision for lighter shades. Clearly, we’d be better off choosing a different set of 32 greys for our limited palette of shades that would make this error evenly distributed across the whole range, so both dark and light shades would be represented with the same precision. If we encoded our original image with such a greyscale that is perceptually linear, but consequently non-linear in terms of emitted light intensity, and that non-linearity matched that of human vision, we’d get the exact same greyscale image we have already seen in Figure 2:
The non-linearity we’re talking about here is the power law relationship we mentioned before, and the non-linear transformation we need to apply to our physically linear greyscale values to transform them into perceptually linear values is called gamma correction.
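To make the idea of a perceptually spaced palette concrete, here is a minimal Python sketch of both quantisation strategies. It’s an illustration only: the helper names are made up, and it assumes the exponent of about 2.2 that we’ll introduce properly later on.

```python
LEVELS = 31  # 5 bits -> 32 shades, i.e. 31 steps between black and white

def quantise_linear(v):
    """Snap a [0, 1] intensity to the nearest physically linear level."""
    return round(v * LEVELS) / LEVELS

def quantise_perceptual(v, gamma=2.2):
    """Snap to a perceptually linear (gamma-spaced) level instead.

    The levels are evenly spaced in gamma space, so the quantisation
    error is roughly even across dark and light shades alike.
    """
    encoded = v ** (1.0 / gamma)  # assumed ~2.2 exponent, discussed later
    return (round(encoded * LEVELS) / LEVELS) ** gamma
```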
Efficient image encoding
Why is all of the above important? Colour data in so-called “true colour” or “24-bit” bitmap images is stored as three 8-bit integers per pixel. With 8 bits, 256 distinct intensity levels can be represented, and if the spacing of these levels were physically linear, we would be losing a lot of precision on dark shades while being unnecessarily precise on light shades (relatively speaking), as shown above.
Clearly, this is not ideal. One solution would be to simply keep using the physically linear scale and increase the bit depth per channel to 16 (or more). This would double the storage requirements (or worse), which was not an option when most common image formats were invented. Therefore, a different approach was taken. The idea was to let the 256 distinct levels represent intensity values on a perceptually linear scale instead, in which case the vast majority of images could be adequately represented on just 8 bits per colour channel.
The transformation used to map the physically linear intensity data (either generated synthetically via an algorithm or captured by a linear device, such as the CMOS sensor of a digital camera or a scanner) to the discrete values of the perceptually linear scale is called gamma encoding.
The 24-bit RGB colour model (RGB24) used on virtually all consumer-level electronic devices uses 8-bit gamma-encoded values per channel to represent light intensities. If you recall what we discussed earlier, this means that pixels with RGB(128, 128, 128) will not emit approximately 50% of the light energy of pixels with RGB(255, 255, 255), but only about 22%! That makes perfect sense! Because of the non-linear nature of human vision, a light source needs to be attenuated to about 22% of its original light intensity to appear half as bright to humans. RGB(128, 128, 128) appears to be half as bright as RGB(255, 255, 255) to us! If you find this confusing, reflect a bit on it because it’s crucial to have a solid understanding of what has been discussed so far (trust me, it will only get more confusing).
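A quick sanity check, using the decoding formula introduced in the next section with the standard γ of 2.2:

$$(128/255)^{2.2} ≈ 0.502^{2.2} ≈ 0.22$$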
Of course, gamma encoding is always done with the assumption that the image is ultimately meant to be viewed by humans on computer screens. In some way, you can think of it as a lossy, MP3-like compression, but for images. For other purposes (e.g. scientific analysis or images meant for further post-processing), using floats and sticking with the linear scale is often a much better choice, as we’ll see later.
The gamma transfer function
The process of converting values from linear space to gamma space is called gamma encoding (or gamma compression), and the reverse gamma decoding (or gamma expansion).
The formulas for these two operations are very simple; we only need to use the aforementioned power law function:
$$V_{encoded} = V_{linear} ^ {1/γ}$$
$$V_{linear} = V_{encoded} ^ {γ}$$
The standard gamma (γ) value to use in computer display systems is 2.2. The main reason for this is that a gamma of 2.2 approximately matches the power law sensitivity of human vision. The exact value that should be used varies from person to person and also depends on the lighting conditions and other factors, but a standard value had to be chosen and 2.2 was good enough. Don’t be too hung up on this.
Now, a very important point that many texts fail to mention is that the input values have to be in the 0 to 1 range, and the output will consequently be mapped to the same range too. From this follows the slightly counter-intuitive fact that gamma values between 0 and 1 are used for encoding (compression) and values greater than 1 for decoding (expansion). The below charts demonstrate the gamma transfer functions for encoding and decoding, plus the trivial linear gamma (γ=1.0) case:
We have only seen greyscale examples so far, but there’s nothing special about RGB images—we just simply need to encode or decode each colour channel individually using the same method.
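In code, the two transfer functions are one-liners. Here is a minimal Python sketch (the helper names are my own, and γ = 2.2 is assumed as discussed above); for an RGB pixel, you simply apply them per channel:

```python
GAMMA = 2.2

def gamma_encode(v_linear):
    """Linear light intensity in [0, 1] -> gamma-encoded value in [0, 1]."""
    return v_linear ** (1.0 / GAMMA)

def gamma_decode(v_encoded):
    """Gamma-encoded value in [0, 1] -> linear light intensity in [0, 1]."""
    return v_encoded ** GAMMA

# Per-channel use on an 8-bit RGB pixel:
pixel = (128, 128, 128)
linear = [gamma_decode(c / 255.0) for c in pixel]  # ≈ [0.22, 0.22, 0.22]
```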
Gamma vs sRGB
sRGB is a colour space that is the de-facto standard for consumer electronic devices nowadays, including monitors, digital cameras, scanners, printers and handheld devices. It is also the standard colour space for images on the Internet.
The sRGB specification defines what gamma to use for encoding and decoding sRGB images (among other things such as colour gamut, but these are not relevant to our current discussion). sRGB gamma is very close to a standard gamma of 2.2, but it has a short linear segment in the very dark range to avoid a slope of infinity at zero (this is more convenient in numeric calculations). The formulas to convert from linear to sRGB and back can be found here.
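For reference, here is a sketch of those conversion formulas in Python, with the linear segment and constants as given by the sRGB specification:

```python
def srgb_to_linear(s):
    """sRGB-encoded value in [0, 1] -> linear light intensity in [0, 1]."""
    if s <= 0.04045:
        return s / 12.92           # short linear segment near black
    return ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Linear light intensity in [0, 1] -> sRGB-encoded value in [0, 1]."""
    if v <= 0.0031308:
        return v * 12.92
    return 1.055 * v ** (1.0 / 2.4) - 0.055
```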
You don’t actually need to understand all these finer details; the important thing to know is that in 99% of the cases you’ll want to use sRGB instead of plain gamma. The reason for this is that all graphics cards have had hardware sRGB support since 2005 or so, so decoding and encoding is virtually free most of the time. The native colour space of your monitor is most likely sRGB (unless it’s a professional monitor for graphics, photo or video work), so if you just chuck sRGB-encoded pixel data into the framebuffer, the resulting image will look correct on the screen (given the monitor is properly calibrated). Popular image formats such as JPEG and PNG can store colour space information, but very often images don’t contain such data, in which case virtually all image viewers and browsers will interpret them as sRGB by convention.
Gamma calibration
We have talked about gamma encoding and decoding so far, but what is gamma calibration then? I found this bit slightly confusing too, so let me clear it up.
As mentioned, 99% of all monitors today use the sRGB colour space natively, but due to manufacturing inaccuracies most monitors would benefit from some additional gamma calibration to achieve the best results. Now, if you never calibrated your monitor, that doesn’t mean that it will not use gamma! That is simply impossible; most CRT and LCD displays in the past and present have been designed and manufactured to operate in sRGB.
Think of gamma calibration as fine tuning. Your monitor will always operate in sRGB, but by calibrating it (either in the video card driver or on the OS level) the monitor’s gamma transfer curve will more closely match the ideal gamma transfer function we discussed earlier. Also, years ago it was possible to shoot yourself in the foot in various creative ways by applying multiple gamma correction stages in the graphics pipeline (e.g. video card, OS and application level), but fortunately this is handled more intelligently nowadays. For example, on my Windows 7 box, if I turn on gamma calibration in the NVIDIA Control Panel then the OS level calibration will be disabled and vice versa.
Processing gamma-encoded images
So, if virtually the whole world defaults to sRGB, what exactly is the problem? If our camera writes sRGB JPEG files, we can just decode the JPEG data, copy it into the framebuffer of the graphics card and the image will be displayed correctly on our sRGB LCD monitor (where “correctly” means it will more or less accurately represent the photographed real-world scene).
The problem happens the moment we start running any image processing algorithms on our sRGB pixel buffer directly. Remember, gamma encoding is a non-linear transformation, and sRGB encoding is basically just a funky way of doing gamma encoding with γ ≈ 2.2. Virtually all image processing algorithms you will find in any computer graphics text assume pixel data with linearly encoded light intensities, which means that feeding these algorithms sRGB-encoded data will render the results subtly—or in some cases quite obviously—wrong! This includes resizing, blurring, compositing, interpolating between pixel values, antialiasing and so on, just to name the most common operations!
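The general remedy is always the same three-step pipeline: decode to linear, process, re-encode. A minimal sketch, reusing the srgb_to_linear/linear_to_srgb helpers from the earlier sketch (the function names are my own):

```python
def process_gamma_correctly(srgb_bytes, operation):
    """Apply a linear-light image operation to sRGB-encoded 8-bit data.

    `operation` is any algorithm (blur, resize, blend...) that expects
    linear intensities in [0, 1].
    """
    linear = [srgb_to_linear(c / 255.0) for c in srgb_bytes]   # 1. decode
    result = operation(linear)                                  # 2. process in linear space
    return [round(linear_to_srgb(v) * 255.0) for v in result]   # 3. re-encode
```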
Effects of gamma-incorrectness
Alright, enough theory talk, show me what these errors actually look like! That’s exactly what we’ll do in this section; we will examine the most common scenarios in which running image processing algorithms directly on sRGB data manifests in incorrect results. Apart from their illustrative purposes, these examples are also useful for spotting gamma-incorrect behaviour or bugs in drawing programs and image processing libraries.
It must be noted that I have chosen examples that clearly demonstrate the problems with gamma-incorrectness. In most cases, the issues are most obvious when using vivid, saturated colours. With more muted colours, the differences might be less noticeable or even negligible in some cases. However, the errors are always present, and image processing programs should work correctly for all possible inputs, not just okayish for 65.23% of all possible images… Also, in the area of physically based rendering, gamma correctness is an absolute must, as we’ll see.
Gradients
The image below shows the difference between gradients calculated in linear (top gradient) and sRGB space (bottom gradient). Note how direct interpolation on the sRGB values yields much darker and sometimes more saturated looking images.
Just going by the looks, one might prefer the look of the sRGB-space versions, especially for the last two. However, that’s not how light would behave in the real world (imagine two coloured light sources illuminating a white wall; the colours would mix as in the linear-space case).
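As a sketch of what “correct” means here, this is how one might interpolate two sRGB colours in linear space (again reusing the earlier hypothetical helpers):

```python
def gradient_colour(c0, c1, t):
    """Linearly interpolate two 8-bit sRGB colours at position t in [0, 1]."""
    out = []
    for a, b in zip(c0, c1):
        la, lb = srgb_to_linear(a / 255.0), srgb_to_linear(b / 255.0)
        out.append(round(linear_to_srgb(la + (lb - la) * t) * 255.0))
    return tuple(out)

# Midpoint of a red-to-green gradient: a naive sRGB-space lerp gives
# (128, 128, 0), a murky olive; mixing in linear space gives about
# (188, 188, 0), which matches how two coloured lights would mix.
print(gradient_colour((255, 0, 0), (0, 255, 0), 0.5))
```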
Almost everybody does this the wrong way: CSS gradients and transitions are wrong (see this thread for details), Photoshop is wrong (as of version CS6), and there’s not even an option to fix it.
Two drawing programs that got this (and gamma-correctness in general) right are Krita and Pixelmator. SVG also lets the user specify whether to use linear or sRGB-space interpolations for gradients, compositing and animations.
Colour blending
Drawing with soft brushes in gamma-incorrect drawing programs can result in weird darkish transition bands with certain vivid colour combinations. This is really a variation of the gradient problem if you think about it (the transition band of a soft brush is nothing else than a small gradient).
Some random people claimed on the Adobe forums that by doing this Photoshop is really mimicking how mixing paints would work in real life. Well, no, it has nothing to do with that. It’s just the result of naive programming that works directly on the sRGB pixel data, and now we’re stuck with that as the default legacy behaviour.
Alpha blending / compositing
As another variation on colour blending, let’s see how alpha blending holds up. We’ll examine some coloured rectangles first. As expected, the gamma-correct image on the left mimics how light would behave in real life, while the sRGB space blending on the right exhibits some weird hue and brightness shifts.
The appearance of false colours is also noticeable when blending two photos together. On the gamma-correct image on the left, the skin tones and the reds and yellows are preserved but faded into the bluish image in a natural way, while on the right image there’s a noticeable overall greenish cast. Again, this might be an effect you like, but it’s not how accurate alpha compositing should work.
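The gamma-correct version of the standard “over” operator is again just the linear pipeline applied to the blending formula; a minimal sketch (hypothetical helper names as before):

```python
def blend_over(fg, bg, alpha):
    """Composite an sRGB foreground over a background with opacity `alpha`,
    doing the actual blending in linear space."""
    out = []
    for f, b in zip(fg, bg):
        lf, lb = srgb_to_linear(f / 255.0), srgb_to_linear(b / 255.0)
        out.append(round(linear_to_srgb(lf * alpha + lb * (1.0 - alpha)) * 255.0))
    return tuple(out)
```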
Image resizing
These examples will only work if your browser doesn’t do any rescaling on the images below. Also, note that screens of mobile devices are more inaccurate with regards to gamma than regular monitors, so for best results try to view this on a desktop computer.
The image below contains a simple black and white checkerboard pixel pattern (100% zoom on the left, 400% zoom on the right). The black pixels are RGB(0,0,0), the minimum light intensity your monitor is capable of producing, and the white ones RGB(255,255,255), which is the maximum intensity. Now, if you squint a little, your eyes will blur (average) the light coming from the image, so you will see a grey that’s halfway in intensity between absolute black and white (therefore it’s referred to as 50% grey).
From this it follows that if we resized the image by 50%, a similar averaging process should happen, but now algorithmically on the pixel data. We expect to get a solid rectangle filled with the same 50% grey that we saw when we squinted.
Let’s try it out! On the image below, A is the checkerboard pattern, B is the result of resizing the pattern by 50% directly in sRGB-space (using bicubic interpolation), and C is the result of resizing it in linear space and then converting to sRGB.
Unsurprisingly, C gives the correct result, but the shade of grey might not be an exact match for the blurred checkerboard pattern on your monitor if it’s not properly gamma-calibrated. Even the math shows this clearly: a 50% grey pixel that emits half as much light as a white pixel should have an RGB value of around (186,186,186), gamma-encoded:
$$0.5^{1/2.2} ≈ 0.72974$$
$$0.72974 · 255 ≈ 186$$
(Don’t worry that on the image the 50% grey is RGB(187,187,187). That small difference is because the image is sRGB-encoded, but I used the much simpler gamma formula for my calculation here.)
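The same check in a few lines of Python (with the earlier hypothetical helpers), averaging one 2×2 checkerboard block:

```python
block = (0, 0, 255, 255)  # two black and two white pixels

# Gamma-incorrect: average the sRGB bytes directly.
naive = sum(block) // 4                                   # 127: far too dark

# Gamma-correct: decode, average in linear space, re-encode.
linear = [srgb_to_linear(c / 255.0) for c in block]
correct = round(linear_to_srgb(sum(linear) / 4) * 255.0)  # 188 (≈ the 187 above;
                                                          # the off-by-one is rounding)
```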
Gamma-incorrect resizing can also result in weird hue shifts on some images. For more details, read Eric Brasseur’s excellent article on the matter.
Antialiasing
I guess it’s no surprise at this point that antialiasing is no exception when it comes to gamma-correctness. Antialiasing in γ=2.2 space results in overly dark “smoothing pixels” (right image); the text appears too heavy, almost as if it were bold. Running the algorithm in linear space produces much better results (left image), although in this case the font looks a bit too thin. Interestingly, Photoshop antialiases text using γ=1.42 by default, and this indeed seems to yield the best looking results (middle image). The reason for this is that most fonts have been designed for gamma-incorrect font rasterizers, hence if you use linear space (correctly), then the fonts will look thinner than they should.
Physically-based rendering
If there’s a single area where gamma-correctness is an absolute must, it’s physically-based rendering (PBR). To obtain realistic looking results, gamma should be handled correctly throughout the whole graphics pipeline. There are many ways to screw this up, but these are the two most common ones:
- Doing the calculations in linear space but failing to convert the final image to sRGB, and then “tweaking” various material and lighting parameters to compensate.
- Failing to convert sRGB texture images to linear space (or to set the sRGB flag when hardware acceleration is used).
These two basic errors are then usually combined in various interesting ways, but the end result will invariably fail to resemble a realistic looking scene (e.g. quadratic light falloff will not appear quadratic anymore, highlights will be overblown and will exhibit some weird hue and saturation shifts, etc.).
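A correct pipeline avoids both mistakes: textures are decoded to linear before any lighting math, and the final frame is encoded to sRGB exactly once at the end. A deliberately simplified, hypothetical sketch:

```python
def shade_pixel(albedo_srgb, light_intensity):
    """Shade one pixel in a (heavily simplified) physically-based renderer.

    albedo_srgb: an 8-bit sRGB texture sample; light_intensity: a linear
    scalar. All lighting math happens in linear space.
    """
    albedo = [srgb_to_linear(c / 255.0) for c in albedo_srgb]    # avoid mistake #2
    lit = [min(a * light_intensity, 1.0) for a in albedo]        # linear-space shading
    return tuple(round(linear_to_srgb(v) * 255.0) for v in lit)  # avoid mistake #1
```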
To demonstrate the first mistake using my own raytracer, the left image below shows a very simple but otherwise quite natural looking image in terms of physical lighting accuracy. This rendering took place in linear space and then the contents of the framebuffer were converted to sRGB before writing it to disk.
On the right image, however, this last conversion step was omitted and I tried to tweak the light intensities in an attempt to match the overall brightness of the gamma-correct image. Well, it’s quite apparent that this is not going to work. Everything appears too contrasty and oversaturated, so we’d probably need to desaturate all material colours a bit and maybe use some more fill lights to come closer to the look of the left image. But this is a losing battle; no amount of tweaking will make the image correct in the physical sense, and even if we got it to an acceptable level for one particular scene with a particular lighting setup, any further changes to the scene would potentially necessitate another round of tweaks to make the result look realistic again. Even more importantly, the material and lighting parameters we would need to choose would be completely devoid of any physical meaning whatsoever; they’d be just a random set of numbers that happen to produce an OK looking image for that particular scene, and thus not transferable to other scenes or lighting conditions. It’s a lot of wasted energy to work like that.
It’s also important to point out that incorrect gamma handling in 3D rendering is one of the main culprits behind the “fake plasticky CGI look” in some (mostly older) games. As illustrated in the image below, rendering realistic looking human skin is almost impossible with a gamma-incorrect workflow; the highlights will just never look right. This gave birth to questionable practices such as compensating for the wrong highlights in the specular maps with inverted hues and all sorts of other nastiness, instead of fixing the problem right at the source…
Conclusion
This is pretty much all there is to gamma encoding and decoding. Congratulations on making it this far; now you’re an officially certified gamma-compliant developer! :)
To recap, the only reason to use gamma encoding for digital images is that it allows us to store images more efficiently at a limited bit depth. It takes advantage of a characteristic of human vision: we perceive brightness in a non-linear way. Most image processing algorithms expect pixel data with linearly encoded light intensities, therefore gamma-encoded images need to be gamma-decoded (converted to linear space) first before we can run these algorithms on them. Often the results need to be converted back to gamma space to store them on disk or to display them on graphics hardware that expects gamma-encoded values (most consumer-level graphics hardware falls into this category). The de-facto standard sRGB colour space uses a gamma of approximately 2.2. That’s the default colour space for images on the Internet and for most monitors, scanners and printers. When in doubt, just use sRGB.
From the end-user perspective, keep in mind that most applications and software libraries do not handle gamma correctly, therefore always make sure to do extensive testing before adopting them into your workflow. For a proper linear workflow, all software used in the chain has to be 100% gamma-correct.
And if you’re a developer working on graphics software, please make sure you’re doing the correct thing. Be gamma-correct and always explicitly state your assumptions about the input and output colour spaces in the software’s documentation.
May all your lights be linear! :)
References & further reading
General gamma/sRGB info
Linear lighting & workflow (LWF)
- Jeremy Birn – Top Ten Tips for More Convincing Lighting and Rendering – (Section 1. Use a Linear Workflow)
Bonus stuff
¹ Only if your operating system is Mac OS X 10.6 or higher or Linux.